
The variance of the $j$th element of the OLS estimator is given by

$$\operatorname{Var}\left(\hat{\beta}_{j}\right)=\sigma^{2}\left(X_{j}^{T} M_{-j} X_{j}\right)^{-1}$$

where $X_j$ is the column of regressors associated with the $j$th variable, and $M_{-j}$ is the residual maker matrix, i.e., the matrix that projects off the space spanned by all columns of $X$ other than the $j$th.

Show that the variance of $\hat{\beta}_j$ can also be written as

$$\operatorname{Var}\left(\hat{\beta}_{j}\right)=\frac{\sigma^{2}}{(n-1) \operatorname{Var}\left(X_{j}\right)}\left(\frac{1}{1-R_{X_{j} \mid X_{-j}}^{2}}\right)$$


1 Answer


Recall that, in general, $$ R^2=1-\frac{\hat{u}'\hat{u}}{\tilde{y}'\tilde{y}}, $$ where $\hat u$ denotes the vector of residuals and $\tilde y$ the vector of demeaned observations on the dependent variable.

In matrix notation, with $X_j$ as the dependent variable of the auxiliary regression on the remaining columns $X_{-j}$, the numerator is just $$ X_{j}^{T} M_{-j} X_{j}, $$ the residual sum of squares of that regression, and the denominator is the sum of squared demeaned observations on $X_j$, i.e., $(n-1)$ times its sample variance (which I prefer to denote by $s^2_{X_j}$ to make clear it is not the population variance of $X_j$).
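
Not part of the argument itself, but here is a minimal numpy sketch (with made-up data and an arbitrary column index $j$) illustrating that $X_j^T M_{-j} X_j$ is indeed the residual sum of squares from regressing $X_j$ on the other columns, and that the denominator is $(n-1)s^2_{X_j}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])  # includes an intercept

j = 2                                      # column of interest (0-indexed, not the intercept)
Xj = X[:, j]
X_mj = np.delete(X, j, axis=1)             # remaining columns, X_{-j}

# Residual maker M_{-j} = I - X_{-j} (X_{-j}' X_{-j})^{-1} X_{-j}'
M = np.eye(n) - X_mj @ np.linalg.solve(X_mj.T @ X_mj, X_mj.T)

numerator = Xj @ M @ Xj                    # X_j' M_{-j} X_j
denominator = (n - 1) * Xj.var(ddof=1)     # (n - 1) times the sample variance of X_j

# The numerator is the residual sum of squares of the auxiliary regression of X_j on X_{-j} ...
resid = Xj - X_mj @ np.linalg.lstsq(X_mj, Xj, rcond=None)[0]
print(np.isclose(numerator, resid @ resid))    # True

# ... so 1 - numerator/denominator is the R^2 of that auxiliary regression.
print(1 - numerator / denominator)
```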

Thus, $$ R_{X_{j} \mid X_{-j}}^{2}=1-\frac{X_{j}^{T} M_{-j} X_{j}}{(n-1)s^2_{X_j}}. $$ Hence, $$ \frac{1}{(n-1)s^2_{X_j}}\cdot\frac{1}{1-R_{X_{j} \mid X_{-j}}^{2}}=\left(X_{j}^{T} M_{-j} X_{j}\right)^{-1}, $$ and substituting this into $\operatorname{Var}(\hat{\beta}_{j})=\sigma^{2}\left(X_{j}^{T} M_{-j} X_{j}\right)^{-1}$ gives the stated expression.
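
As a final sanity check, a short numpy sketch (again with illustrative data, an arbitrary $\sigma^2$, and a hypothetical column index) confirming numerically that the two expressions for $\operatorname{Var}(\hat{\beta}_j)$ coincide, and that both equal the $(j,j)$ element of $\sigma^2(X^TX)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, j = 500, 2.0, 1
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])

Xj = X[:, j]
X_mj = np.delete(X, j, axis=1)
M = np.eye(n) - X_mj @ np.linalg.solve(X_mj.T @ X_mj, X_mj.T)

# Form 1: sigma^2 (X_j' M_{-j} X_j)^{-1}
var1 = sigma2 / (Xj @ M @ Xj)

# Form 2: sigma^2 / [(n - 1) s^2_{X_j} (1 - R^2_{X_j | X_{-j}})]
s2 = Xj.var(ddof=1)
R2 = 1 - (Xj @ M @ Xj) / ((n - 1) * s2)
var2 = sigma2 / ((n - 1) * s2 * (1 - R2))

# Both coincide with the (j, j) element of sigma^2 (X'X)^{-1}
var_direct = sigma2 * np.linalg.inv(X.T @ X)[j, j]
print(var1, var2, var_direct)              # all three agree
```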

Christoph Hanck