Many sources emphasize the importance of the F-test p-value in multiple regression and justify this as a guard against p-hacking. It's kind of intuitive that if you can't reject the null hypothesis that all coefficients are 0, then it's silly to conclude that any individual coefficient is nonzero. And the fact that every statistical test has some false-positive rate explains why you might end up with a very small p-value on an individual coefficient even when the F-test p-value is large.
But there is an alternative story: you can guard against the same multiple-comparisons problem by applying a Bonferroni correction to the individual coefficient p-values. This too makes sense.
So what exactly is the relationship between the F-test and the Bonferroni correction? Don't they both kind of solve the same problem? If so, why do we need both?
I'm especially interested in the following edge case: Suppose you observe a regression with, say, the following figures:
- 5 predictors
- F-test p-value = 0.2
- one predictor's p-value is 0.002.
Now the F-test says 'nothing to see here'. But if we use Bonferroni instead, that coefficient's adjusted p-value is 5 × 0.002 = 0.01, which is significant by any conventional standard. So which do you believe?
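To make the comparison concrete, here is a minimal sketch of how both quantities are computed for an OLS fit with 5 predictors. The data here are simulated under the global null (no real effects), so the specific p-values are illustrative, not a reproduction of the 0.2/0.002 scenario above; the point is just to show that the F-test and the Bonferroni correction operate on different statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 40, 5
X = rng.standard_normal((n, k))
y = rng.standard_normal(n)  # global null: no predictor truly matters

# OLS fit with an intercept column
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
df_resid = n - k - 1
sigma2 = resid @ resid / df_resid

# Per-coefficient two-sided t-tests (slopes only, dropping the intercept)
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
p_t = 2 * stats.t.sf(np.abs(beta / se), df_resid)[1:]

# Overall F-test of H0: all k slope coefficients are zero
ss_tot = ((y - y.mean()) ** 2).sum()
ss_res = resid @ resid
F = ((ss_tot - ss_res) / k) / (ss_res / df_resid)
p_F = stats.f.sf(F, k, df_resid)

# Bonferroni: multiply each slope p-value by the number of tests, cap at 1
p_bonf = np.minimum(p_t * k, 1.0)

print("F-test p-value:", p_F)
print("raw slope p-values:", p_t)
print("Bonferroni-adjusted:", p_bonf)
```

Note that the Bonferroni step only rescales the marginal t-test p-values, while the F statistic pools evidence across all coefficients at once, which is one reason the two can disagree.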