
I apologise for the trivial question, but I have got myself confused about how heteroskedasticity affects OLS regression and would be grateful for your help.

In standard OLS, homoskedasticity is not required for unbiasedness. Hence, under heteroskedasticity, the coefficient estimates will still be unbiased. The standard errors, however, will be wrong, which makes the t-tests invalid.
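
To convince myself of this, here is a small simulation sketch (assuming numpy and statsmodels; the variance function $0.5 + x^2$ is just an illustrative choice, not anything canonical):

```python
# Monte Carlo sketch: under heteroskedasticity the OLS slope stays unbiased,
# but the conventional standard error no longer tracks the true sampling
# variability, while a robust (HC1) standard error does much better.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, reps, beta1 = 200, 2000, 1.0
slopes, se_conv, se_hc = [], [], []

for _ in range(reps):
    x = rng.uniform(0, 2, n)
    u = rng.normal(0, 0.5 + x**2)        # error variance grows with x
    y = 1.0 + beta1 * x + u
    res = sm.OLS(y, sm.add_constant(x)).fit()
    slopes.append(res.params[1])
    se_conv.append(res.bse[1])           # conventional (homoskedastic) SE
    se_hc.append(res.get_robustcov_results(cov_type="HC1").bse[1])

print("mean slope estimate:  ", np.mean(slopes))   # close to beta1 = 1.0
print("empirical sd of slope:", np.std(slopes))    # the true sampling sd
print("mean conventional SE: ", np.mean(se_conv))  # systematically off
print("mean robust (HC1) SE: ", np.mean(se_hc))    # close to empirical sd
```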

But what about other statistics such as the F-test, R-squared and adjusted R-squared?

I am thinking that if the coefficient estimates are unbiased, then the estimated regression residuals ($y - \beta_0 - \beta_1 x_1 - \beta_2 x_2 - \dots - \beta_n x_n = u$) should also be fine. But in that case, nothing really changes with R-squared or the F-test, as these are based on the SSR?
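
For example, R-squared is computed purely from the fitted residuals and the total variation of $y$; a quick sketch (the data-generating process here is my own assumption) makes that mechanical point:

```python
# Sketch: R-squared is 1 - SSR/SST computed from the fitted residuals, so
# nothing in its construction depends on the error variance being constant.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(0, 2, 500)
y = 1.0 + x + rng.normal(0, 0.5 + x**2)   # heteroskedastic errors
res = sm.OLS(y, sm.add_constant(x)).fit()

ssr = np.sum(res.resid**2)                # sum of squared residuals
sst = np.sum((y - y.mean())**2)           # total sum of squares
print(1 - ssr / sst, res.rsquared)        # identical by construction
```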

However, I know that for a single restriction $F = t^2$, which would suggest that the F-test should also be invalid under heteroskedasticity. How does all of this fit together?
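
As a numerical check on this (a sketch; HC1 is just one common robust covariance estimator), the identity $F = t^2$ holds mechanically for whichever covariance matrix is plugged in, so the usual F-statistic inherits the invalid conventional standard errors while a robust Wald version does not:

```python
# Sketch: for the single restriction "slope = 0", F equals t^2 under both
# the conventional and the heteroskedasticity-robust (HC1) covariance; the
# two pairs differ only in which covariance matrix they are built from.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(0, 2, 500)
y = 1.0 + x + rng.normal(0, 0.5 + x**2)   # heteroskedastic errors again
X = sm.add_constant(x)

conv = sm.OLS(y, X).fit()                 # conventional covariance
robust = sm.OLS(y, X).fit(cov_type="HC1") # robust covariance

print(conv.tvalues[1]**2, conv.f_test("x1 = 0").fvalue)
print(robust.tvalues[1]**2, robust.f_test("x1 = 0").fvalue)
```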

Jhonny
  • R-squared is a goodness of fit measure. It is not really used for inference. Intuitively, as heteroskedasticity increases, the R-squared of a given model will decrease. This should be fairly clear from the formula. – lmo Jun 09 '19 at 11:33
  • 1
    "under heteroskedasticity, the coefficient estimates will still be unbiased". Intuitively it follows under this condition that residuals will also be unbiased, and then F-test's p-value (which is similar to R^squared p-value) should result in unbiased estimate. https://stats.stackexchange.com/questions/111602/does-r-squared-have-a-p-value – Alexey Burnakov Sep 09 '19 at 13:43

0 Answers