If you look at the F statistic (see, e.g., Proof that F-statistic follows F-distribution), you may observe that it is a quadratic form, so negative as well as positive differences get squared and produce large positive values.
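Concretely, for a set of $q$ linear restrictions $H_0: R\boldsymbol{\beta}=\boldsymbol{r}$, one common way to write the statistic is

$$F = \frac{(R\hat{\boldsymbol{\beta}}-\boldsymbol{r})'\left[R(X'X)^{-1}R'\right]^{-1}(R\hat{\boldsymbol{\beta}}-\boldsymbol{r})}{q\,s^2},$$

where $s^2$ is the usual unbiased estimate of the error variance. The matrix in the middle is positive definite, so $F\geq 0$, and $F$ grows as $R\hat{\boldsymbol{\beta}}$ moves away from $\boldsymbol{r}$, regardless of the direction of that departure.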
Hence, large values of the test statistic provide evidence that the restrictions imposed by the null are false. Small values of the test statistic, in turn, imply that the estimates are very close to the values implied by the null and hence provide no evidence against it.
Now, a large value of the test statistic, which is constructed as a square, may arise both when the true coefficient is less than the value specified in the null and when it is larger than that value. Hence, the (default version of the) F test indeed tests against the two-sided alternative that the underlying coefficients (or, more precisely, at least one of them) differ from zero.
By the "default version", I mean the one that tests that the slope coefficients other than the coefficient on the constant are zero, $H_0:\boldsymbol{\beta}=\boldsymbol{0}$. But the general idea does not depend on that in that you can of course test all sorts of multiple linear restrictions with an F-test, and the large values of the F test statistic indicate significant departures from the null hypothesis in either direction.
Due to the squaring, however, you no longer know in which direction the departure arose (nor, in the F test, from which coefficient(s)). Much like when you seek the number that, upon squaring, gave you the value 4, you do not know whether it was -2 or 2.
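Incidentally, for a single restriction such as $H_0:\beta_j=0$ the relationship is exact: $F = t^2$, where $t = \hat{\beta}_j/\widehat{\operatorname{se}}(\hat{\beta}_j)$, so the sign of the t statistic carries precisely the directional information that the squaring discards.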