How can we use these statistics as a basis for deciding which regression model fits best? Not many existing questions cover the concept of adjusted R-squared.
-
Possible duplicate of [What's the difference between multiple R and R squared?](http://stats.stackexchange.com/questions/90793/whats-the-difference-between-multiple-r-and-r-squared) – John Oct 20 '16 at 05:09
-
@John the previous question didn't relate to _adjusted_ R squared. – Gordon Smyth Oct 20 '16 at 05:32
-
Yeah, it's not a duplicate; I didn't find an explanation for adjusted R squared there. – Rohit Venkat Gandhi Mendadhala Oct 20 '16 at 06:32
1 Answer
I won't go into the real maths of it (as I don't fully understand it myself), but I can explain it in more general terms.
Multiple R-squared is simply the R-squared statistic for models with multiple predictor variables. It measures the proportion of variation in the response variable that the predictor variables can explain. The fundamental point is that when you add predictors to your model, multiple R-squared will always increase (or at least never decrease), because each new predictor explains some additional portion of the variance, if only by chance.
Adjusted R-squared controls for this increase by adding a penalty for the number of predictors in the model. It therefore balances parsimony against goodness of fit. Generally, a large difference between your multiple and adjusted R-squared indicates that you may have overfit your model.
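In symbols (this is the standard formula, also given in the Wikipedia article linked in the comments below): with $n$ observations and $p$ predictors,

$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1},$$

so an extra predictor must explain enough additional variance to offset the smaller denominator $n - p - 1$; otherwise $\bar{R}^2$ goes down even though $R^2$ goes up.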
Hope this helps. Hopefully someone will come along and explain this in more depth.
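To make this concrete, here is a minimal sketch in Python (NumPy only; the simulated data and variable names are made up for illustration) that fits ordinary least squares by hand and computes both statistics. Adding a pure-noise predictor raises the multiple R-squared but typically lowers the adjusted R-squared:

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(X, y):
    """Fit OLS with an intercept and return (R^2, adjusted R^2)."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])          # design matrix with intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares coefficients
    resid = y - A @ beta
    ss_res = resid @ resid                        # residual sum of squares
    ss_tot = ((y - y.mean()) ** 2).sum()          # total sum of squares
    r2 = 1 - ss_res / ss_tot
    p = X.shape[1]                                # number of predictors
    adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return r2, adj

# Simulated data: y truly depends on x1 only.
n = 50
x1 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(size=n)
noise = rng.normal(size=n)                        # an irrelevant predictor

print(r_squared(x1[:, None], y))
print(r_squared(np.column_stack([x1, noise]), y))  # R^2 rises; adjusted R^2 typically falls
```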

-
The Wikipedia article on the coefficient of determination covers $r^2$, $R^2$, as well as the adjusted R squared $\bar{R}^2$: https://en.wikipedia.org/wiki/Coefficient_of_determination#Adjusted_R2 – Beyer Oct 20 '16 at 07:20
-
Why should the two be different when you have only one variable (beyond the intercept)? Does that mean they will never be equal? – Rodrigo Jan 17 '21 at 20:57