I have been wondering about the F test that many statistical packages provide along with the standard regression output. As I understand it, the F statistic can be computed as
$$ F_{df_{reg},df_{res}} = \frac{R^2/df_{reg}}{(1-R^2)/df_{res}}. $$
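To make this concrete, here is a minimal Python sketch (all numbers hypothetical) that plugs an $R^2$ into the formula above and looks up the p-value; as I understand it, packages report the upper-tail probability of the F distribution:

```python
# Minimal sketch with hypothetical values: plugging an R^2 into the
# formula above and looking up the p-value that packages report.
from scipy.stats import f

r_squared = 0.25    # hypothetical R^2
n = 50              # hypothetical sample size
k = 3               # hypothetical number of predictors (excluding the intercept)

df_reg = k          # regression degrees of freedom
df_res = n - k - 1  # residual degrees of freedom

F = (r_squared / df_reg) / ((1 - r_squared) / df_res)
p = f.sf(F, df_reg, df_res)  # survival function: upper-tail probability only
print(F, p)
```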
The hypothesis tested can be formulated in two different ways (with $P^2$ denoting the population value of $R^2$):
$H_{0}$: $P^2 = 0$
$H_{1}$: $P^2 > 0$
or
$H_{0}$: All $\beta_{i} = 0$
$H_{1}$: One or more $\beta_{i} \neq 0$
The first pair of hypotheses seems to suggest that the F test is one-tailed, which is in line with my intuition, since $R^2$ cannot take negative values.
The second pair of hypotheses, however, suggests two-tailed testing. To me, it also seems to suggest a direct correspondence between the outcome of the F test for the entire model ($R^2$) and the t tests for the individual coefficients (which I know may not always hold); see the sketch below.
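For instance, in the single-predictor case the correspondence is exact: the F statistic equals the square of the t statistic, so the p-value from the upper tail of $F(1, n-2)$ matches the two-tailed t p-value. A small sketch with simulated (hypothetical) data:

```python
# Single-predictor case with simulated data: F = t^2, and the
# two-tailed t p-value equals the upper-tail F p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 0.5 * x + rng.normal(size=30)

res = stats.linregress(x, y)        # res.pvalue is the two-tailed t test
t = res.slope / res.stderr
F = t**2
p_f = stats.f.sf(F, 1, len(x) - 2)  # upper tail of F(1, n-2)

print(res.pvalue, p_f)              # agree up to floating-point rounding
```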
So my question comes down to this: is the F test, which tests whether a model explains a significant amount of variance compared to the null model, a one-tailed or a two-tailed test, and why?