My models look like:
lme1 <- lme(y ~ X + Y + V,       random = ~ 1 | Subject, data = mydata, method = "ML")
lme2 <- lme(y ~ X + Y + V2 + V3, random = ~ 1 | Subject, data = mydata, method = "ML")
lme3 <- lme(y ~ X + Y + V4,      random = ~ 1 | Subject, data = mydata, method = "ML")
where X and Y are factors, and V, V2, V3, and V4 are continuous variables (modeled as covariates). I am using method = "ML" in the hope that I can compare likelihood values across the models.
My research question is whether V4 (in lme3) is a better predictor than V2 and V3 together, whether V2 + V3 is better than V, and so on. What goodness-of-fit measure is valid here? Can I use AIC values to compare models with different sets of parameters?
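For concreteness, here is roughly how I have been pulling the fit statistics in R (just a sketch, assuming all three models converge on the same data set; since the models are not nested, I understand anova() likelihood-ratio tests would not apply, which is why I am asking about AIC):

library(nlme)

AIC(lme1, lme2, lme3)  # information criteria for the three fits side by side
BIC(lme1, lme2, lme3)  # BIC penalizes additional parameters more heavily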
I've also found some references on computing $R^2$ for mixed models. In particular, I am interested in the likelihood-ratio-test $R^2$ (Magee, 1990), which computes an $R^2$ by comparing each model to a null model. Using this method, I'd be comparing all three of my models to the same null model with just y ~ 1. Is it then valid to compare the resulting $R^2$ values?
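My understanding of the Magee (1990) statistic is $R^2_{LR} = 1 - \exp\!\left(-\tfrac{2}{n}\,(\log L_{\text{model}} - \log L_{\text{null}})\right)$. Here is a sketch of how I would compute it; the helper r2_lr is my own, and I am assuming the null model should keep the random intercept and also be fit with ML:

# Null model: intercept only, same random structure, fit with ML
lme0 <- lme(y ~ 1, random = ~ 1 | Subject, data = mydata, method = "ML")

# Likelihood-ratio R^2 (Magee, 1990), as I understand the formula
r2_lr <- function(model, null) {
  n <- length(residuals(null))  # number of observations
  1 - exp(-(2 / n) * (as.numeric(logLik(model)) - as.numeric(logLik(null))))
}

r2_lr(lme1, lme0)  # each model compared to the same null
r2_lr(lme2, lme0)
r2_lr(lme3, lme0)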
I am not a statistician, but I would like to use a valid (or at least justifiable) measure for my analysis. Any feedback would be greatly appreciated.