The $R^2$ is the proportion of the variation in the dependent variable $Y$ explained by the linear regression. It can take values between 0 and 1.
If I use the adjusted $R^2$ instead, what is the interpretation of this value?
There are many existing posts covering this, and Wikipedia has a nice section on it, so I think you should take a look.
When you fit a regression model, you get an $R^2$ statistic. We all want a model with a high $R^2$, but we don't want to overfit. Our job is to build a model with variables that explain a meaningful share of the variability, not to throw every available variable into it.
Unfortunately, we can always add more variables to the model to achieve a higher $R^2$, regardless of whether those variables are correlated with the dependent variable.
Adjusted $R^2$ is an attempt to take care of this: it can decrease when a new variable doesn't help explain the variability.
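A quick illustration of that point, as a minimal sketch in Python (assuming `numpy` and `statsmodels` are available; the toy data and variable names are mine): adding pure-noise predictors can never lower $R^2$, but it will typically lower the adjusted $R^2$.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)      # y truly depends on x only
noise = rng.normal(size=(n, 5))       # five pure-noise predictors

X_small = sm.add_constant(x)
X_big = sm.add_constant(np.column_stack([x, noise]))

fit_small = sm.OLS(y, X_small).fit()
fit_big = sm.OLS(y, X_big).fit()

# R^2 can only go up when predictors are added; adjusted R^2 can go down.
print(f"R^2:     {fit_small.rsquared:.4f} -> {fit_big.rsquared:.4f}")
print(f"adj R^2: {fit_small.rsquared_adj:.4f} -> {fit_big.rsquared_adj:.4f}")
```

Running this, the first line shows $R^2$ creeping upward despite the junk predictors, while the second shows the adjusted value dropping.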
EDIT:
Adjusted $R^2$ has no direct, simple interpretation; it is a metric for comparing different models, much like AIC or BIC. It is a measure of fit.
The interpretation of $1-R^2_{\text{Adj}}$ is the same as that of any other regularized loss function: it penalizes a complicated model.
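To make the penalty explicit, recall the usual definition for $n$ observations and $p$ predictors:

$$R^2_{\text{Adj}} = 1 - (1 - R^2)\,\frac{n-1}{n-p-1},$$

so for a fixed $R^2$, increasing $p$ shrinks the adjusted value: a new predictor must improve $R^2$ enough to offset the growth of the $\frac{n-1}{n-p-1}$ factor.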
In particular, this is similar to the $L_0$ norm, which also penalizes the number of parameters but is not as popular as $L_1$ and $L_2$ regularization, which have nicer computational properties. See also Lasso regression (an $L_1$ penalty) and the Elastic Net (a mix of $L_1$ and $L_2$).