
I am studying hypothesis testing for regression coefficients, and the following is given:

The hypotheses for testing the significance of any individual regression coefficient, such as $\beta_{j},$ are $$ H_{0}: \beta_{j}=0, \quad H_{1}: \beta_{j} \neq 0 $$ If $H_{0}: \beta_{j}=0$ is not rejected, then this indicates that the regressor $x_{j}$ can be deleted from the model. The test statistic for this hypothesis is $t_{0}=\frac{\hat{\beta}_{j}}{\sqrt{\hat{\sigma}^{2} C_{j j}}}=\frac{\hat{\beta}_{j}}{\operatorname{se}\left(\hat{\beta}_{j}\right)}$ where $C_{j j}$ is the diagonal element of $\left(\mathbf{X}^{\prime} \mathbf{X}\right)^{-1}$ corresponding to $\hat{\beta}_{j} .$ The null hypothesis $H_{0}: \beta_{j}=0$ is rejected if $\left|t_{0}\right|>t_{\alpha / 2, n-k-1}$.
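To make the setup concrete for myself, here is a small numerical sketch in Python (the data are simulated, not from any real example) that computes $t_0$ directly from the quoted formulas:

```python
import numpy as np

# Minimal sketch with simulated data: compute t_0 = beta_hat_j / se(beta_hat_j)
# from the matrix formulas quoted above.
rng = np.random.default_rng(0)
n, k = 50, 2                                   # n observations, k regressors (plus intercept)
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
beta_true = np.array([1.0, 2.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k - 1)       # unbiased estimate of sigma^2

j = 2                                          # test H0: beta_2 = 0
t0 = beta_hat[j] / np.sqrt(sigma2_hat * XtX_inv[j, j])
print(t0)
```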

1st question: It is simply stated that this $t_0$ is the test statistic, but how can we show/prove that this hypothesis can be tested using the given $t$-statistic?

Given the linear model $G: \mathbf{Y}=\mathbf{X} \beta+\varepsilon,$ where $\mathbf{X}$ is $n \times p$ of rank $p$ and $\varepsilon \sim N_{n}\left(0, \sigma^{2} I_{n}\right),$ we wish to test the hypothesis $H: \mathbf{A} \beta=c,$ where $\mathbf{A}$ is $q \times p$ of rank $q$.
The likelihood ratio test of $H$ is given by $$ \Lambda=\frac{L\left(\hat{\beta}_{H}, \hat{\sigma}_{H}^{2}\right)}{L\left(\hat{\beta}, \hat{\sigma}^{2}\right)}=\left(\frac{\hat{\sigma}^{2}}{\hat{\sigma}_{H}^{2}}\right)^{n / 2} $$ and we can define an $F$ statistic based on this ratio as $F=\frac{n-p}{q}\left(\Lambda^{-2 / n}-1\right)$, which has an $F_{q, n-p}$ distribution when $H$ is true. We then reject $H$ when $F$ is too large.
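Here is a similar sketch of the likelihood ratio quantities (again with simulated data; I am assuming $\hat{\beta}_H$ is the constrained least-squares solution, which is the restricted MLE under normal errors), showing that $F$ can be computed directly from the two residual sums of squares:

```python
import numpy as np

# Minimal sketch with simulated data: the LRT statistic Lambda and the F statistic
# derived from it, for H: A beta = c with A of rank q.
rng = np.random.default_rng(1)
n, p = 60, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

A = np.array([[0.0, 0.0, 1.0]])                # H: beta_2 = 0  (q = 1)
c = np.zeros(1)
q = A.shape[0]

# Unrestricted least-squares / ML fit
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
rss = np.sum((y - X @ beta_hat) ** 2)

# Restricted fit under H (constrained least-squares formula)
XtX_inv = np.linalg.inv(X.T @ X)
lam = np.linalg.solve(A @ XtX_inv @ A.T, A @ beta_hat - c)
beta_H = beta_hat - XtX_inv @ A.T @ lam
rss_H = np.sum((y - X @ beta_H) ** 2)

# The MLEs of sigma^2 divide the RSS by n, so Lambda^{-2/n} = rss_H / rss
Lambda = (rss / rss_H) ** (n / 2)
F = (n - p) / q * (Lambda ** (-2 / n) - 1)
F_direct = ((rss_H - rss) / q) / (rss / (n - p))
print(F, F_direct)                             # identical up to rounding
```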

2nd question: Can we prove that the likelihood ratio test is equivalent to the $t$-test in the 1st question?

IamKnull

1 Answer


For linear regression, a t-test and an F-test are the same: see Difference between t-test and ANOVA in linear regression.

This is also true for your situation. You can transform the $X$ and $\beta$ in your equation $Y=X\beta +\epsilon$ such that $\mathbf{A}\beta$ is one of your parameters (*). And you can shift the $Y$-variable (subtract $c$ times the corresponding transformed regressor column from it) so that the test becomes $\mathbf{A}\beta = 0$. And then you have the same situation as typical regression, where a t-test and an F-test are the same.

This is a somewhat intuitive explanation; it may still need the algebra to show that it actually works (a numerical check is sketched below).
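Short of the full algebra, here is a quick numerical check (a Python sketch with made-up data) that for a single-coefficient restriction the full-vs-reduced $F$ statistic is exactly $t_0^2$:

```python
import numpy as np

# Minimal sketch with simulated data: for H0: beta_j = 0, the F statistic from
# the full-vs-reduced model comparison equals t_0 squared.
rng = np.random.default_rng(2)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 0.3, 0.0]) + rng.normal(size=n)

j = 2
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
rss = np.sum((y - X @ beta_hat) ** 2)

XtX_inv = np.linalg.inv(X.T @ X)
t0 = beta_hat[j] / np.sqrt(rss / (n - p) * XtX_inv[j, j])

# Reduced model: delete column j (imposing the restriction beta_j = 0)
X_red = np.delete(X, j, axis=1)
rss_H = np.sum((y - X_red @ np.linalg.lstsq(X_red, y, rcond=None)[0]) ** 2)
F = (rss_H - rss) / (rss / (n - p))            # q = 1 restriction

print(t0 ** 2, F)                              # the two agree up to rounding
```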


(*) Example of that transformation: recently we had a question, Confidence interval for the difference of two fitted values from a linear regression model. The question was about the distribution/confidence interval of the difference of the expectations at two points, $\hat{y}_1$ and $\hat{y}_2$ (which can be expressed as a linear combination of the $\beta$). By a transformation of the regressors, this can be made directly equivalent to the distribution of a single parameter in the regression.
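Here is a rough sketch of that transformation (Python, made-up data): choose any invertible $T$ whose first row is the contrast vector $a'$; refitting the regression on $\mathbf{X}T^{-1}$ turns $a'\beta$ into the first coefficient of the transformed model, so its ordinary $t$-test is the test of the contrast.

```python
import numpy as np

# Minimal sketch with simulated data: reparameterize so that a linear combination
# a' beta becomes one of the fitted coefficients.
rng = np.random.default_rng(3)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

a = np.array([0.0, 1.0, -1.0])                         # contrast of interest: beta_1 - beta_2
T = np.vstack([a, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # invertible, first row = a
X_star = X @ np.linalg.inv(T)                          # X beta = X_star (T beta)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
gamma_hat = np.linalg.lstsq(X_star, y, rcond=None)[0]
print(a @ beta_hat, gamma_hat[0])                      # first transformed coefficient = a' beta_hat
```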

Sextus Empiricus

    @IamKnull did you downvote this? Why not give a comment about it as well? I could improve the answer, but this blunt response is not very motivating. – Sextus Empiricus Oct 20 '20 at 09:31