
I note that some stepwise backwards elimination methods use AIC to decide which variables to eliminate, while others use the F-statistic. Why would I use one over the other, and is there a justification for this choice? All the literature I’ve found merely notes that there are two approaches, and some sources say that “AIC is better than the F-statistic” but offer no explanation.

I am aware that stepwise regression is frowned upon in general, so this does not need to be a discussion of that aspect; there are many posts on it both here and in the research literature.

SimonsSchus
  • See [Equivalence of AIC and p-values in model selection](http://stats.stackexchange.com/q/89214/17230). I suspect any rigid preference for AIC - i.e. a p-value cut-off of 0.157 rather than the 0.05 you might be thinking of - comes from the asymptotic equivalence of selecting by AIC to selecting by leave-one-out cross-validation, which seems relevant when the goal of the model selection is to improve predictive performance. Nevertheless, validation of the *whole* selection + fitting procedure often shows disappointing results & I'll link yet again to ... – Scortchi - Reinstate Monica Mar 23 '16 at 14:24
  • ... [Algorithms for automatic model selection](http://stats.stackexchange.com/q/20836/17230) for the explanation (if not for your benefit, for that of other readers). – Scortchi - Reinstate Monica Mar 23 '16 at 14:25
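A quick numeric sketch (my own, not from the thread) of where the 0.157 cut-off mentioned in the first comment comes from. With AIC = 2k − 2·log-likelihood, dropping one parameter lowers AIC exactly when the likelihood-ratio statistic is below 2, which corresponds to a chi-squared (1 df) p-value above P(χ²₁ > 2) ≈ 0.157:

```python
from scipy.stats import chi2

# Dropping one parameter changes AIC by
#   dAIC = -2 + LR,  where LR = 2*(loglik_full - loglik_reduced).
# AIC favours the reduced model iff LR < 2, i.e. iff the
# likelihood-ratio p-value exceeds P(chi2 with 1 df > 2):
cutoff = chi2.sf(2, df=1)
print(round(cutoff, 3))  # ~0.157
```

So an AIC-based backward step behaves like a p-value-based step with a threshold of about 0.157 instead of 0.05, making AIC the more permissive criterion (it retains weaker predictors).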

0 Answers