
Say I have these two models:

$y = \beta_0 +\beta_1x_1 + u$

$y = \beta_0 +\beta_1x_1 +\beta_2x_2 + u$

and the $p$ value for $H_0:\beta_1 = 0$ (testing at $\alpha = 10\%$) is less than 0.001 in both models, but the $t$ statistic in model 1 is smaller than the $t$ statistic in model 2.

Does this suggest that model 1 has better evidence for rejecting $H_0:\beta_1 = 0 $ ?

My suspicion is that it doesn't, because the whole point of the $p$ value is to give the smallest significance level at which the null would be rejected, and therefore the notion of "less" or "more" evidence for rejecting becomes meaningless.

Am I totally off or somewhat correct?

Thanks in advance for any help!

Jona

1 Answer

  1. The p-value actually has nothing to do with $\alpha$; it depends only on the value of the t-statistic and its degrees of freedom. Your decision on whether to keep a predictor is based purely on comparing the p-value with $\alpha$. If you have already set $\alpha = 0.1$, then you should reject the null hypothesis ($H_0 : \beta_1 = 0$) as long as the p-value < $\alpha$, no matter what the value of the t-statistic is. In your models the p-value is below 0.001, so you would reject $H_0$ for any $\alpha > 0.001$. In practice we usually choose $\alpha = 0.05$ or $\alpha = 0.01$, so a p-value below 0.001 can be regarded as strong evidence against $H_0$ (see the first sketch after this list).
  2. In both models you test $H_0:\beta_1=0$. If we denote the hypothesis $H_{01}: \beta_1=0$ in Model 1 and $H_{02}: \beta_1=0$ in Model 2, then you will find that $H_{01}$ and $H_{02}$ are not identical, because they belong to different models. It doesn't make much sense to compare them.
  3. If you are testing the same null hypothesis for the same model, then you may say you are "more confident in rejecting $H_0$" for a "more extreme" t-statistic. "More extreme" means "larger absolute value", not "larger value" of the t-statistic.
  4. Model 2 has more predictors than Model 1, which might introduce multicollinearity. Suppose $\beta_1$ is significant in Model 1. If $x_2$ in Model 2 is highly correlated with $x_1$, then Model 2 suffers from multicollinearity. In that case it can happen that the t-test for $\beta_1$ in Model 2 has a large p-value, so you do not reject $H_0$. However, you cannot say that Model 1 has more evidence against $H_0$ than Model 2: since $x_1$ and $x_2$ are highly correlated, the regression in Model 2 is essentially still between $y$ and $x_1$. The increase in the p-value is not because $x_1$ doesn't contribute to $y$, but because the model you use is not well specified (see the second sketch after this list).
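
To make point 1 concrete, here is a minimal sketch in Python (the t-statistic and degrees of freedom are made-up numbers, purely for illustration): the two-sided p-value is a function of the t-statistic and its degrees of freedom alone, and $\alpha$ only enters at the final comparison.

```python
# Hypothetical numbers, just to illustrate point 1: the p-value is computed
# from the t-statistic and its degrees of freedom; alpha plays no role until
# the final comparison p-value < alpha.
from scipy import stats

t_stat = 3.5   # t-statistic for beta_1 (made-up value)
df = 97        # residual degrees of freedom, n - k - 1 (made-up value)

p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided p-value
alpha = 0.10
print(f"p-value = {p_value:.4g}, reject H0: {p_value < alpha}")
```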
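
And a rough simulation of point 4 (again only a sketch, with made-up data, assuming numpy and statsmodels are available): when $x_2$ is nearly a copy of $x_1$, adding it to the regression inflates the standard error of $\hat\beta_1$ and hence its p-value, even though $x_1$ genuinely drives $y$.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # x2 is almost collinear with x1
y = 1.0 + 2.0 * x1 + rng.normal(size=n)    # only x1 truly affects y

m1 = sm.OLS(y, sm.add_constant(x1)).fit()                          # Model 1
m2 = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()   # Model 2

# The t-statistic for beta_1 shrinks and its p-value grows in Model 2,
# reflecting multicollinearity rather than a lack of effect of x1 on y.
print("Model 1: t =", round(m1.tvalues[1], 2), " p =", m1.pvalues[1])
print("Model 2: t =", round(m2.tvalues[1], 2), " p =", m2.pvalues[1])
```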
JellicleCat
  • 424
  • 3
  • 6
  • Thank you for all the good points - very helpful. Are you saying that it **is** the case that model 1 provides better evidence towards rejecting the null, just based on the t-statistics? Is this what you meant in your number 3? Thanks again! – Jona Mar 01 '15 at 02:54
  • @Jona No. As I said in #2, the two tests $H_{01}$ and $H_{02}$ are not comparable, because they are for different models. In #3, I meant that if you are testing the **same** hypothesis for the **same** model, then a smaller p-value lends you more confidence in rejecting the null hypothesis. For example, if you fit Model 1 twice using different data sets, you have two p-values $p_1$ and $p_2$. If $p_1 < p_2$, you can say dataset 1 gives you more confidence in rejecting $H_{01}$ than dataset 2. – JellicleCat Mar 04 '15 at 07:15