I have an experiment that I'll try to abstract here. Imagine I toss three white stones in front of you and ask you to make a judgment about their position. I record a variety of properties of the stones and your response, and I do this over a number of subjects. I consider two models: one says the stone nearest to you predicts your response, and the other says the geometric center of the stones predicts your response. So, using lmer in R I could write:
library(lme4)  # lmer() lives in lme4
# assuming the per-trial data are in a long-format data frame d (one row per trial)
mNear <- lmer(resp ~ nearest + (1|subject), data = d, REML = FALSE)
mCenter <- lmer(resp ~ center + (1|subject), data = d, REML = FALSE)
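For concreteness, here is roughly what I imagine d looks like; the column names and sizes below are placeholders rather than my actual data.

# a minimal sketch of the assumed data layout (made-up values, one row per trial)
d <- data.frame(
  subject = factor(rep(1:20, each = 30)),  # 20 subjects, 30 trials each (arbitrary sizes)
  nearest = runif(600),                    # position of the nearest stone (placeholder)
  center  = runif(600),                    # position of the stones' geometric center (placeholder)
  resp    = rnorm(600)                     # the subject's positional judgment (placeholder)
)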
UPDATE AND CHANGE - more direct version that incorporates several helpful comments
I could try
anova(mNear, mCenter)
Which is incorrect, of course, because the models aren't nested and I can't really compare them that way. I was expecting anova.mer to throw an error, but it didn't. The nesting I could construct here isn't natural, though, and it still leaves me with weaker analytical statements. When models are nested naturally (e.g. quadratic on linear) the test only runs in one direction, but in this case what would it mean to get asymmetric findings?
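To make the contrast with natural nesting concrete, here is the kind of one-directional comparison I mean; mQuad is just an illustrative name, and the quadratic model contains mNear as a special case, so the likelihood-ratio test only goes one way.

mQuad <- lmer(resp ~ nearest + I(nearest^2) + (1|subject), data = d, REML = FALSE)
anova(mNear, mQuad)  # does the quadratic term improve on the linear model? There is no reverse question to ask.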
For example, I could fit a third model:
mBoth <- lmer(resp ~ center + nearest + (1|subject), data = d, REML = FALSE)
Then I can run anova().
anova(mCenter, mBoth)
anova(mNear, mBoth)
This is fair to do, and now I find that center adds to the nearest effect (the second command), but BIC actually goes up when nearest is added to center (the penalty for the extra parameter outweighs the gain in fit). This confirms what I suspected.
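To see the asymmetry in one place, I can line up the information criteria for all three fits; this is not a test, just the same numbers anova() reports, collected side by side.

AIC(mNear, mCenter, mBoth)  # lower is better; mBoth pays for its extra fixed effect
BIC(mNear, mCenter, mBoth)  # BIC penalizes the extra parameter more heavily than AIC
logLik(mBoth)               # raw log-likelihood, if I want an LR statistic by hand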
But is finding this sufficient? And is this fair when center and nearest are so highly correlated?
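One quick check on the collinearity worry (again using the hypothetical d) is the raw correlation between the two predictors, overall and within subject; if it is very high, the coefficients in mBoth will have inflated standard errors and the two anova comparisons above become hard to interpret on their own.

cor(d$nearest, d$center)  # overall correlation between the two candidate predictors
# per-subject correlations, in case the relationship differs across subjects
sapply(split(d, d$subject), function(s) cor(s$nearest, s$center))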
Is there a better way to compare the models analytically when it isn't a matter of adding and removing explanatory variables (degrees of freedom)?