I have a question about model selection based on the AIC criterion. I have 5 predictor variables (no interactions) and fit all possible model combinations. I then calculate the AIC for each model, and my understanding is that I should select the model with the lowest AIC, since AIC already penalizes for the number of parameters.

However, when I compare the model with the lowest AIC (four parameters) against the model with the second-lowest AIC (three parameters) using the anova() function in R, the F-test prefers the three-parameter model: the more complex model does not explain significantly more variance than the simpler one, even though it has the lower AIC.

I'm confused as to which model should actually be selected. Can someone clarify this discrepancy for me?
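(For concreteness, here is a minimal numeric sketch of why the two criteria can disagree. It is written in Python with plain NumPy rather than R, and uses simulated data, not my actual dataset. AIC charges about 2 units of log-likelihood per extra parameter, while an F-test/likelihood-ratio test at the 5% level demands roughly a 3.84-unit improvement for one extra parameter, so a predictor whose contribution falls between those two thresholds can lower the AIC yet still fail the significance test.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 4))  # four candidate predictors
# Simulated truth: strong effects for x1..x3, a weak effect for x4.
y = X @ np.array([1.0, 0.8, 0.5, 0.15]) + rng.normal(size=n)

def fit_ols(X, y):
    """Least-squares fit with intercept; returns (RSS, number of coefficients)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return float(resid @ resid), Xd.shape[1]

def aic(rss, n, k):
    # Gaussian log-likelihood up to an additive constant;
    # k + 1 counts the regression coefficients plus the error variance.
    return n * np.log(rss / n) + 2 * (k + 1)

rss3, k3 = fit_ols(X[:, :3], y)  # three-predictor model (nested)
rss4, k4 = fit_ols(X, y)         # four-predictor model (full)

aic3, aic4 = aic(rss3, n, k3), aic(rss4, n, k4)

# The F statistic for the nested comparison -- the same quantity that
# anova(m3, m4) reports in R for two nested lm() fits.
F = ((rss3 - rss4) / (k4 - k3)) / (rss4 / (n - k4))

print(f"AIC (3 predictors): {aic3:.2f}")
print(f"AIC (4 predictors): {aic4:.2f}")
print(f"F statistic for the extra predictor: {F:.2f}  (5% cutoff is about 3.94 here)")
```

Whether the weak fourth predictor ends up in the "AIC improves but F-test fails" window depends on the simulated sample, but the thresholds themselves (about 2 for AIC vs. about 3.84 for a 1-df test at the 5% level) are what create the possibility of disagreement.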
- Selected for what? What problem do you want to use your model to solve? – Dave Feb 24 '21 at 14:57
- @Dave I am running regression models trying to explain variance in behavioral performance scores based on age, IQ, education, and two brain measures, so I want to see which model best explains the performance scores. – Gigi123 Feb 24 '21 at 16:07