
I normally use `r.squaredGLMM()` (from the MuMIn package) to extract marginal and conditional R-squared (or pseudo-R-squared) values for my glmm models; however, this does not work for nlme models.
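For context, a minimal sketch of the kind of call that works for me with lme4 fits, shown here with lme4's built-in `sleepstudy` data rather than my own data:

```r
library(lme4)   # for lmer() and the sleepstudy example data
library(MuMIn)  # for r.squaredGLMM()

# Works for (g)lmer fits: returns marginal (R2m) and conditional (R2c) R-squared
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
r.squaredGLMM(fit)
```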

For example, I am running this sort of nlme model:

```r
library(nlme)  # nonlinear mixed-effects models

model <- nlme(wt ~ A * (1 - exp(k * (t0 - age))),                  # growth function
              fixed = A + k + t0 ~ 1,                              # fixed effects
              random = list(squirrel_id = pdDiag(A + t0 + k ~ 1)), # pdDiag specifies uncorrelated random effects
              data = growth_envt_F,                                # input dataset
              start = c(A = 253.6, k = 0.03348, t0 = 32.02158),    # starting values
              na.action = na.omit,                                 # omit any NA values
              control = nlmeControl(maxIter = 200, pnlsMaxIter = 10, msMaxIter = 100)) # iteration limits before the fit is declared divergent
```

Does anyone know of a package I can use to do this, given that `r.squaredGLMM(model)` doesn't work with nlme models?

Background: We are working in a situation where we need to compare multiple models to see how they perform relative to each other (without concern for parsimony). We are using these models to estimate growth for real data. The models all appear to do a poor job of estimating growth rate, but this judgement is visually subjective, and a reviewer has asked us to present a more objective measure, such as a goodness-of-fit statistic.

  • Why are you interested in $R^2$ for a nonlinear model? $R^2$ does not have the same [interpretation in the nonlinear case](https://stats.stackexchange.com/questions/551915/interpreting-nonlinear-regression-r2) as in the linear case. – Dave Nov 29 '21 at 19:08
  • @Dave True! I added some edits above to address your question. – Blundering Ecologist Nov 29 '21 at 19:26
  • Why would $R^2$ be a reasonable goodness of fit measure? – Dave Nov 29 '21 at 19:46
  • The reviewers specifically asked for $R^2$ values for the models. I am not sure what would be more appropriate. Might you have a suggestion for a better goodness-of-fit measure for nlme models? – Blundering Ecologist Nov 29 '21 at 20:04
  • If you haven't considered it already, one alternative I've seen for model comparison in the nonlinear case is mean squared error or root mean squared error. It has the benefit of keeping the statistic in the units of the dependent variable and can be calculated easily with `model.rmse = sqrt(mean(model$residuals^2))` (see the sketch after these comments). – gibson25 Nov 30 '21 at 07:25
  • @gibson25 I didn't know about this! In terms of the output, what does it mean as a goodness of fit measure? (Or can I even pitch it this way to the reviewers?) – Blundering Ecologist Dec 01 '21 at 15:38
  • It's generally a hard problem to say what counts as a good fit. Even when $R^2$ has its usual meaning, there are situations where $R^2=0.4$ could be quite good and situations where $R^2 = 0.9$ is not so good. On the other hand, a metric like (R)MSE gives a sense of the variance of the residuals, which gets us to think about the problem in its context, rather than treating $R^2$ like grades in school, where $R^2=0.4$ is like an F that makes us sad and $R^2 = 0.9$ is like an A that makes us happy. // If the reviewers insist on goodness-of-fit metrics, it might be time to have a statistician co-author. – Dave Dec 01 '21 at 15:53
  • I agree with what Dave said. It may be hard to say whether a given R2 or RMSE is "good" by itself, but they can be great for model comparison. The most concise intuitive meaning of the RMSE is that it's a measure of the average error, with penalties for larger errors. In that sense, it measures model fit since a closer fit will result in lower errors generally. – gibson25 Dec 01 '21 at 18:48
  • @gibson25 Watch out for the difference between mean absolute error and RMSE. While both result in values in the original units, they are not equal, and your description of "measure of the average error" is in line with absolute error, not square error. – Dave Dec 01 '21 at 19:11
  • That makes sense - although at this stage, authorship changes are generally unheard of in my field and would likely get the MS bounced. Model comparison is really what I am trying to get at (we are comparing six different growth models). We visually showed that one model clearly describes growth better, but the reviewers pushed for a comparison with numbers (hence, they suggested R^2). But I'll see if RMSE will satisfy their requested changes! – Blundering Ecologist Dec 02 '21 at 17:58
  • @Dave To double-check that I am interpreting it correctly: an RMSE of 20 for model 1 versus 22 for model 2 would mean that the root mean squared error for model 1 is lower (and therefore model 1 more closely fits the data)? – Blundering Ecologist Dec 02 '21 at 18:19
  • RMSE, MSE, and $R^2$ are equivalent from the standpoint of model comparisons (depending on some definitions). The reason people seem to like $R^2$ over (R)MSE is the idea that $R^2$ can be likened to grades in school, where $R^2=0.4$ is an $F$ and $R^2=0.9$ is an $A$. “I have a grade-$A$ model,” you might say with joy. However, it could be that $0.4$ is very good for one data set while $0.9$ is mediocre for another data set. I am skeptical of drawing an equivalence between $R^2$ and grades. – Dave Dec 02 '21 at 19:51
  • @Dave Right, that makes sense. I feel comfortable explaining the problematic thinking behind using $R^2$, so thank you for helping there! For the RMSE though (to make sure I understand correctly): when I get an RMSE value for each model, the lower the value, the better the fit? – Blundering Ecologist Dec 02 '21 at 20:14
  • @BlunderingEcologist [Yes](https://stats.stackexchange.com/a/554334/247274) – Dave Dec 02 '21 at 20:32
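Following up on the RMSE suggestion in the comments, a minimal sketch of the comparison (assuming `model` is the nlme fit from the question; the other model objects are hypothetical names for the remaining candidate growth-model fits):

```r
# RMSE from an nlme/lme fit. residuals() is used instead of model$residuals so the
# grouping level can be set explicitly: level = 0 uses only the fixed effects
# (population-level fit), while the default uses the innermost grouping level.
rmse <- function(fit, level = 0) sqrt(mean(residuals(fit, level = level)^2, na.rm = TRUE))

rmse(model)
# rmse(model2); rmse(model3); ...  # hypothetical names for the other candidate fits
```

Whichever model has the lowest RMSE fits the observed weights most closely, which is the comparison described in the comments above.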
