If you want to test whether a change to a website has led to a difference in the conversion rate, you have to perform an A/B test with
- group A: original site, aka control
- group B: site including the change
After the test has run for several days, you can compare the conversion rates of the two groups using, e.g., the $\chi^2$-test, as suggested by Greg.
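For illustration, here is a minimal sketch of such a comparison in Python, assuming SciPy is available; the visitor and conversion counts are invented numbers, not data from the question.

```python
from scipy.stats import chi2_contingency

# Rows: group A (control), group B (variant); columns: converted, not converted.
# These counts are made-up example values.
observed = [
    [120, 4880],   # group A: 120 conversions out of 5000 visitors
    [150, 4850],   # group B: 150 conversions out of 5000 visitors
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, p = {p_value:.4f}")
```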
Why? What is wrong with comparing the results before and after the change, i.e. performing a sequential test?
The reason is confounders. In order to make the results comparable you have to
- either guarantee that the only thing that has changed is the change to the site itself,
- or control everything else that may affect the conversion rate by randomly assigning the participants / visitors to one group or the other, which is what an A/B test does (see the sketch below).
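As a sketch of how such a random assignment might be implemented, assuming each visitor carries a stable identifier: hashing that identifier gives a deterministic 50/50 split, so a returning visitor always sees the same variant. The salt string and the split are arbitrary example choices.

```python
import hashlib

def assign_group(visitor_id: str, salt: str = "ab-test-example") -> str:
    """Deterministically assign a visitor to group 'A' or 'B'."""
    digest = hashlib.sha256(f"{salt}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_group("visitor-42"))  # stable across calls for the same visitor
```

Hashing a stable ID rather than flipping a coin on every page view keeps each visitor in one group for the whole test, which is usually what you want.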
Here are some examples of what can go wrong when a "sequential test" is performed:
- You change the site on Friday evening, so you essentially compare working days with the weekend.
- You change the site right before a sale starts (assuming an e-commerce site).
- You change the site color from blue to green right before or during the "National Blue Celebration Day".
- Some days before the change, a special advertising campaign has led to increased traffic with an increased conversion rate.
- Around the time of the change, the weather in the geographic region with the most visitors also changes, influencing the mood and hence the results (e.g. if the site sells sunscreen).

And so on.
In fact, in the fascinating but messy data-generating environment of the internet, it is nearly impossible to control all confounders by thinking of them and excluding them beforehand. Hence, use A/B tests.
You can find more on this interesting subject by browsing the tag here on Cross Validated.