First of all, treating the tests as independent:
If you define an error as wrongly rejecting one particular null hypothesis, then the error rate is 0.05 (the per-comparison error rate), and no correction for multiple comparisons is needed.
If you define an error as wrongly rejecting one or more of the null hypotheses, then the error rate is $1-(0.95)^{n}$, where $n$ is the number of tests (the family-wise error rate), and a correction for multiple comparisons is needed to keep the type I error rate at the low level of 0.05.
So it primarily depends on the context. Nevertheless, it is easy to see here that strong conclusions cannot be drawn from the data alone without correcting for multiple comparisons.
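As a quick illustration of the formula above, here is a minimal sketch (plain Python; the 0.05 level and the test counts are arbitrary choices) showing how fast the family-wise error rate grows with the number of independent tests:

```python
# Family-wise error rate for n independent tests, each run at alpha = 0.05.
alpha = 0.05
for n_tests in (1, 2, 5, 10, 20, 100):
    fwer = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:>3} tests -> P(at least one false rejection) = {fwer:.3f}")
```

With 20 tests the probability of at least one false rejection is already about 0.64.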
Secondly, having considered this point, you should correct for the number of hypotheses tested when:

1. doing multiple tests on a single data set;
2. doing the same test (or different ones) on different data sets.
One way to illustrate point 1 is to consider the following situation:
perform a test of $H_0: \mu \le 0$ vs. $H_1: \mu > 0$ and a test of $H_0: \mu \ge 0$ vs. $H_1: \mu < 0$ on the same data. Each of these tests is one-tailed and thus essentially halves the p-value of the two-tailed test. Nevertheless, the conclusion of this pair of tests must agree with that of the test $H_0: \mu = 0$ vs. $H_1: \mu \ne 0$, so each one-tailed p-value must be multiplied back by the number of tests, i.e. 2.
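A small sketch of this point, using simulated data (the sample size, effect size, and seed are arbitrary choices; `scipy >= 1.6` is assumed for the `alternative` argument):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.2, scale=1.0, size=30)  # one sample of simulated data

# The two one-tailed tests and the two-tailed test, all on the same data.
p_greater = stats.ttest_1samp(x, 0, alternative="greater").pvalue  # H1: mu > 0
p_less    = stats.ttest_1samp(x, 0, alternative="less").pvalue     # H1: mu < 0
p_two     = stats.ttest_1samp(x, 0, alternative="two-sided").pvalue

# The smaller one-tailed p-value is half the two-tailed one; multiplying it
# back by the number of tests (2) recovers the two-tailed p-value.
print(p_greater, p_less, p_two)
print(np.isclose(2 * min(p_greater, p_less), p_two))  # True
```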
The post "Explain the xkcd jelly bean comic: What makes it funny?" illustrates point 2 (with pictures!).
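In the same spirit, here is a small simulation of point 2 (the 20-data-set setup mirrors the comic; the sample size, seed, and 0.05 level are arbitrary choices): when the null is true in every data set, uncorrected testing typically flags at least one "significant" result, while a Bonferroni correction usually flags none.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha, n_datasets = 0.05, 20

# 20 independent data sets where the null hypothesis (mean = 0) is true.
pvals = np.array([
    stats.ttest_1samp(rng.normal(loc=0.0, scale=1.0, size=30), 0).pvalue
    for _ in range(n_datasets)
])

print("uncorrected rejections:", np.sum(pvals < alpha))               # often >= 1
print("Bonferroni rejections: ", np.sum(pvals < alpha / n_datasets))  # usually 0
```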