I'm wondering about the effect of true correlations among random effects on the standard error of my fixed effects in lme4::lmer
models in R.
My assumption is that if there are true correlations--as indicated by a significant improvement in model fit when the correlation parameters are added to the model--then the inclusion of these parameters should improve the precision of the estimates somewhere else in the model. In particular, I would expect the standard error of the fixed effects to be smaller in the model in which the correlation parameters are contributing to the model fit.
However, a number of people have pointed out that the inclusion of correlation parameters does not improve the precision of the fixed-effect estimates even when it "improves" model fit:
- In a multi-level model, what are the practical implications of estimating versus not-estimating random effect correlation parameters?
- I've conducted simulations of my own to the same end and presented them at a recent R user group meeting (Part 3: http://github.com/pedmiston/visualizing-lmer)
- Shravan Vasishth has a few blog posts on a similar question, e.g., http://vasishth-statistics.blogspot.com/2014/11/should-we-fit-maximal-linear-mixed.html
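To make the comparison concrete, here is a minimal, self-contained sketch of the kind of simulation I have in mind. All names and parameter values (subject counts, the true correlation `rho`, effect sizes) are my own illustrative choices, not from any of the sources above. It simulates data with a genuinely correlated random intercept and slope, then fits the maximal model and the zero-correlation model (lme4's double-bar syntax) and compares fixed-effect standard errors:

```r
library(lme4)

set.seed(1)
# Simulate 40 subjects x 20 observations with truly correlated
# random intercepts and slopes (true correlation rho = 0.6)
n_subj <- 40; n_obs <- 20
rho <- 0.6; sd_int <- 1; sd_slope <- 0.5
Sigma <- matrix(c(sd_int^2,            rho * sd_int * sd_slope,
                  rho * sd_int * sd_slope, sd_slope^2), nrow = 2)
re <- MASS::mvrnorm(n_subj, mu = c(0, 0), Sigma = Sigma)

d <- expand.grid(subject = factor(1:n_subj), obs = 1:n_obs)
d$x <- rnorm(nrow(d))
d$y <- (2 + re[d$subject, 1]) + (0.5 + re[d$subject, 2]) * d$x +
       rnorm(nrow(d), sd = 1)

# Maximal model: correlated random intercepts and slopes
m_corr <- lmer(y ~ x + (1 + x | subject), data = d)
# Zero-correlation model (double-bar syntax)
m_zcp  <- lmer(y ~ x + (1 + x || subject), data = d)

anova(m_zcp, m_corr)          # LRT on the single correlation parameter
summary(m_corr)$coefficients  # fixed-effect estimates and SEs...
summary(m_zcp)$coefficients   # ...which tend to be near-identical
```

In runs like this, the likelihood-ratio test can favor the model with the correlation parameter while the fixed-effect standard errors in the two fits barely differ, which is exactly the pattern I am asking about.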
I'm going to push the dialog even further and challenge someone to demonstrate a situation in which including random effect correlation parameters does anything other than add complexity to the model.
My ignorance might be due to an over-interpretation of fixed effects as the best indicator of "average behavior", so I am interested to see the conditions under which random correlation parameters are useful to people who use these models to make inferences (as opposed to simply observing a correlation in the sample).
Thanks for your help.