
I keep seeing articles about the drawbacks of R-squared (and why we therefore need adjusted R-squared).

One drawback is that "Every time you add a predictor to a model, the R-squared increases, even if due to chance alone. It never decreases. Consequently, a model with more terms may appear to have a better fit simply because it has more terms." (link).

Mathematically, why does this happen? And mathematically, why can adjusted R-squared solve this problem?
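For concreteness, here is a small NumPy simulation of the phenomenon (the data-generating model and seed are arbitrary choices of mine): fitting by ordinary least squares, appending a pure-noise predictor can never lower R-squared, because the larger model can always set the new coefficient to zero and reproduce the smaller model's fit; adjusted R-squared multiplies (1 - R²) by (n-1)/(n-p-1), a penalty that grows with the number of predictors p, so it can fall when the new column adds little.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=(n, 1))
y = 2 * x[:, 0] + rng.normal(size=n)  # y depends only on the first predictor

def r_squared(X, y):
    # Add an intercept column and fit by ordinary least squares.
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - ss_res / ss_tot

def adj_r_squared(X, y):
    n, p = len(y), X.shape[1]
    r2 = r_squared(X, y)
    # Penalty factor (n-1)/(n-p-1) grows with p.
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

r2_small = r_squared(x, y)
# Append a predictor that is pure noise, unrelated to y.
x_big = np.column_stack([x, rng.normal(size=n)])
r2_big = r_squared(x_big, y)

# R-squared never decreases: the bigger model's least-squares residual
# sum of squares is at most the smaller model's.
assert r2_big >= r2_small

adj_small = adj_r_squared(x, y)
adj_big = adj_r_squared(x_big, y)
print(r2_small, r2_big)    # R-squared goes up (or stays equal)
print(adj_small, adj_big)  # adjusted R-squared may go down
```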

RockTheStar
    Check [this](http://thestatsgeek.com/2013/10/28/r-squared-and-adjusted-r-squared/) and [this](https://stats.stackexchange.com/questions/207717/why-does-r2-grow-when-more-predictor-variables-are-added-to-a-model). – Lucas Farias Feb 21 '19 at 00:52
    For further analysis on the [problems with R-Squared](https://data.library.virginia.edu/is-r-squared-useless/) – Kunio Feb 21 '19 at 01:33
  • Perfect @LucasFarias, will check them out. – RockTheStar Feb 21 '19 at 17:55
    Also relevant: https://stats.stackexchange.com/questions/414349/is-my-model-any-good-based-on-the-diagnostic-metric-r2-auc-accuracy-e/414350#414350 – mkt Jun 25 '19 at 08:34

0 Answers