You could do a chi-squared test of independence on the number of responses from the two mail campaigns, but I can't tell whether that's what your linked website is doing. I can't get it to behave the same as a chi-squared test given the same input, so I don't think I'd trust it.
Anyway, here's a chi-squared calculator.
Since you're interested in testing whether the response rates of the two mail campaigns are significantly different (correct me if I'm putting words in your mouth here), you're interested in testing the fit of an independence model. That is, to ask: if you assume the response rates are independent of which campaign was sent, what is the probability of observing rates as different as your two rates are, or of the difference being even larger? If the rates are independent of the distinction between campaigns, you should expect both rates to be equal. Any departure from equality is evidence against the independence model, but you'll want either a very large difference or a lot of data (or some lesser mix of both) to establish a statistically significant difference, i.e., evidence that your data's deviation from the independence model was unlikely to result from sampling error if that model describes reality accurately.
The chi-squared test of independence statistic for your case is:
$$\chi^2=\sum\frac{(\text{observed}-\text{expected})^2}{\text{expected}}$$
For example, if you had 114 and 89 responses out of 1,000 mailings for each of your two campaigns:
$$\text{expected response} = \frac{114+89}{2}=101.5$$
$$\text{expected non-response} = \frac{886+911}{2}=1{,}000-101.5=898.5$$
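In general, the expected count for each cell under the independence model is its row total times its column total divided by the grand total; because both campaigns involved 1,000 mailings here, that formula reduces to the simple averages above:
$$\text{expected response}=\frac{1{,}000\times(114+89)}{2{,}000}=101.5$$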
$$\chi^2=\frac{(114-101.5)^2+(89-101.5)^2}{101.5}+\frac{(886-898.5)^2+(911-898.5)^2}{898.5}$$
$$\chi^2=\frac{2(12.5)^2}{101.5}+\frac{2(12.5)^2}{898.5}=3.427$$
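If you'd rather double-check by machine than by hand, here's a minimal sketch in Python (assuming you have SciPy available) that reproduces these numbers:

```python
from scipy.stats import chi2_contingency

# Rows: the two campaigns; columns: responses, non-responses
observed = [[114, 886],
            [89, 911]]

# correction=False turns off Yates' continuity correction, which scipy
# applies to 2x2 tables by default, so the result matches the hand calculation
chi2, p, dof, expected = chi2_contingency(observed, correction=False)

print(chi2)      # ~3.4266, matching the 3.427 computed by hand
print(p)         # ~0.0641, the two-tailed p of about .064
print(dof)       # 1 degree of freedom for a 2x2 table
print(expected)  # [[101.5, 898.5], [101.5, 898.5]]
```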
This checks out with the chi-squared calculator I linked to, which also gives a two-tailed $p=.064$. That's your $\chi^2$'s statistical significance: the chance you'd get differences between your observations and expectations as big as, or bigger than, the ones here (all 12.5 in this example) if you took another sample of the same size from an overall population that actually responds equally to both campaigns.
Most hypothesis testers set a rule before performing their significance tests: reject the null hypothesis model if there's less than a 5% chance of getting data that violates the model's expectations as much as or more than the sample data does, assuming the model is true of the population from which the sample was randomly selected. It's not really necessary (and may even be inadvisable) to dichotomize your attitude toward the null hypothesis model into a wholesale reject-or-retain decision, but that's the conventional approach, so it's probably what others will expect you to do. In this case, your $p>.05$, so you "couldn't reject the null" (and I'm not sure why that website of yours says otherwise with the same example data).
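If it helps to see the decision rule mechanically, here's a short sketch that recovers the p-value from the $\chi^2$ statistic and applies the conventional $\alpha=.05$ cutoff (the cutoff is the convention described above, not anything built into the math):

```python
from scipy.stats import chi2

alpha = 0.05                  # the conventional significance level
chi2_stat = 3.427             # the statistic computed above
p = chi2.sf(chi2_stat, df=1)  # P(chi-squared with 1 df exceeds 3.427)

print(round(p, 3))            # 0.064
print("reject the null" if p < alpha else "fail to reject the null")
# prints "fail to reject the null", since .064 > .05
```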
The significance testing approach I've just described follows frequentist theory; it's worth noting that there's a Bayesian alternative. One may also choose to focus first and foremost on estimating the effect size and placing it within a confidence interval (whose width is twice the margin of error), and treat the question of whether that interval includes the null value as a secondary concern. There are lots of deeper issues to think about here, but I'll leave them to other discussions for now.
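To give a flavor of the estimation approach, here's a sketch that computes a simple Wald 95% confidence interval for the difference in response rates; this is one common textbook interval among several, not necessarily what any particular calculator uses:

```python
from math import sqrt
from scipy.stats import norm

n1 = n2 = 1000              # mailings per campaign
p1, p2 = 114 / n1, 89 / n2  # observed response rates (.114 and .089)
diff = p1 - p2              # effect size: difference in response rates

# Wald standard error of a difference between two independent proportions
se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = norm.ppf(0.975)         # ~1.96 for a 95% interval

lo, hi = diff - z * se, diff + z * se
print(f"difference = {diff:.3f}, 95% CI = ({lo:.4f}, {hi:.4f})")
# difference = 0.025, 95% CI = (-0.0014, 0.0514)
# The interval just barely includes zero, consistent with p = .064 above.
```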