If you have five groups and you wish to know whether there is a significant difference between any of the group means, you would have to do 10 pairwise comparisons to test all possible pairs of means:
Mean for group 1 vs. mean for group 2
Mean for group 1 vs. mean for group 3
Mean for group 1 vs. mean for group 4
Mean for group 1 vs. mean for group 5
Mean for group 2 vs. mean for group 3
Mean for group 2 vs. mean for group 4
Mean for group 2 vs. mean for group 5
Mean for group 3 vs. mean for group 4
Mean for group 3 vs. mean for group 5
Mean for group 4 vs. mean for group 5
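The 10 comparisons above are just the number of ways to choose 2 groups out of 5. A short Python sketch (the group labels are arbitrary) can enumerate them and confirm the count:

```python
from itertools import combinations
from math import comb

groups = [1, 2, 3, 4, 5]

# All unordered pairs of group means to compare
pairs = list(combinations(groups, 2))
for a, b in pairs:
    print(f"Mean for group {a} vs. mean for group {b}")

# The count matches "5 choose 2" = 10
print(len(pairs), comb(5, 2))
```

With k groups the count is k(k − 1)/2, so the number of tests grows quickly: 10 groups would already require 45 comparisons.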
If you do each of these tests at the alpha = .05 level, each individual test has 5 chances in a hundred of concluding the difference is significant when it is really just due to chance. If you do 10 such tests, those chances accumulate: the probability that at least one comparison comes out significant just by chance is 1 − (.95)^10, or about .40. (The simple sum, 10 × .05 = .50, is an upper bound on this probability and makes the same point.) This is an unacceptable error rate. We could use a Bonferroni correction, testing each comparison at alpha/10 = .005, so that the overall Type I error rate stays at about .05.
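The arithmetic behind the inflated error rate and the Bonferroni fix is easy to check directly (assuming, for simplicity, that the 10 tests are independent):

```python
alpha = 0.05
m = 10  # number of pairwise tests

# Probability of at least one false positive across m independent tests
familywise = 1 - (1 - alpha) ** m
print(round(familywise, 3))  # 0.401

# Bonferroni correction: run each individual test at alpha / m
bonferroni_alpha = alpha / m
print(bonferroni_alpha)  # 0.005

# The familywise rate after correction stays at or below the nominal alpha
corrected = 1 - (1 - bonferroni_alpha) ** m
print(round(corrected, 3))  # 0.049
```

The pairwise tests are not truly independent (they share group means), but the Bonferroni bound holds regardless of dependence, which is why it is a safe, if conservative, correction.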
Alternatively, we could use ANOVA to compare all of these groups with a single test.
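To make the single-test idea concrete, here is a minimal one-way ANOVA sketch in pure Python; the five groups of measurements are made up purely for illustration. One F statistic summarizes all five groups at once, replacing the 10 pairwise tests:

```python
# Hypothetical data: five groups of four observations each
groups = [
    [4.1, 4.8, 5.0, 4.5],
    [5.2, 5.9, 6.1, 5.6],
    [4.0, 4.4, 4.2, 4.6],
    [6.0, 6.3, 5.8, 6.1],
    [4.9, 5.1, 5.3, 4.7],
]

k = len(groups)                  # number of groups
n = sum(len(g) for g in groups)  # total observations
grand_mean = sum(x for g in groups for x in g) / n

# Between-group sum of squares: how far each group mean sits from the grand mean
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)

# Within-group sum of squares: spread of observations around their own group mean
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

# One F test for the null hypothesis that all five group means are equal
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 2))
```

In practice you would hand the groups to a library routine (e.g., SciPy's `f_oneway`) rather than computing the sums of squares by hand, but the decomposition above is what such routines do internally.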
Aside from computational limitations, what advantage does ANOVA have over pairwise Bonferroni corrected t-tests?