I was running a regression with 1000 observations and 6 explanatory variables (counting the constant term as a "variable"). So the population model was:
$Y_{i} = \beta_{1} + \beta_{2}X_{2i} + \dots + \beta_{6}X_{6i} + u_{i}$
So the sample model is:
$\hat{Y}_{i} = b_{1} + b_{2}X_{2i} + \dots + b_{6}X_{6i}$
(Estimated via OLS, i.e., I assume the Gauss–Markov assumptions hold: https://en.wikipedia.org/wiki/Gauss%E2%80%93Markov_theorem )
After performing a test of individual significance (a t-test) on $\beta_{6}$, I got the result that I cannot reject the null hypothesis that $\beta_{6} = 0$.
But if I perform an F-test of joint significance ($H_{0}: \beta_{2} = \beta_{3} = \dots = \beta_{6} = 0$), I get that I can reject the null hypothesis.
What's the intuition (and perhaps the mathematics) behind this? How can one test contradict the other?
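For concreteness, here is a small self-contained simulation (hypothetical numbers, not my actual data) that reproduces the same pattern. It uses two highly correlated regressors, so each individual t-test comes out insignificant while the joint F-test strongly rejects:

```python
# Toy simulation (assumed setup, not the original data): near-collinear
# regressors inflate the standard errors of individual coefficients,
# so t-tests fail while the joint F-test still rejects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)          # nearly collinear with x1
y = 1.0 + 0.5 * x1 + 0.5 * x2 + 5.0 * rng.normal(size=n)

# OLS by hand: b = (X'X)^{-1} X'y
X = np.column_stack([np.ones(n), x1, x2])
k = X.shape[1]
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
s2 = resid @ resid / (n - k)                  # estimated error variance
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
t_stats = b / se
p_t = 2 * stats.t.sf(np.abs(t_stats), df=n - k)

# Joint F-test of H0: all slope coefficients equal zero
tss = ((y - y.mean()) ** 2).sum()
rss = resid @ resid
F = ((tss - rss) / (k - 1)) / (rss / (n - k))
p_F = stats.f.sf(F, k - 1, n - k)

print("t-test p-values (const, x1, x2):", p_t)
print("joint F-test p-value:", p_F)
```

The individual standard errors are inflated by the collinearity between x1 and x2, so neither slope is significant on its own, yet together they clearly explain variation in y, which is exactly what the F-test picks up.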
Thanks!