4

MuMIn::r.squaredGLMM and piecewiseSEM::sem.model.fits should be performing the same calculations. Both implement Nakagawa and Schielzeth's R2 for generalized linear mixed-effects models. However, I keep getting different results from the two. Does anyone know why? Which is more accurate? Thanks for any input you may have.

Here is an example:

library(lme4)
data("cbpp")
mod <- glmer(incidence / size ~ period + (1 | herd), weights = size,
             family = binomial, data = cbpp)
library(piecewiseSEM)
sem.model.fits(mod)   # R2m: 0.09, R2c: 0.19
library(MuMIn)
r.squaredGLMM(mod)    # R2m: 0.11, R2c: 0.11

Documentation for the two functions:

https://rdrr.io/cran/piecewiseSEM/man/sem.model.fits.html
https://www.rdocumentation.org/packages/MuMIn/versions/1.40.0/topics/r.squaredGLMM

Jeremy Miles
Kevin

1 Answer

2

I suspect it is because you are comparing apples and oranges, i.e. sem.model.fits above is computing a different statistic from r.squaredGLMM above.

Nakagawa, Johnson and Schielzeth's R2 for binomial models, as outlined in their 2017 paper "The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded", offers two ways of calculating the distribution-specific variance for the binomial distribution: the theoretical variance, which is a fixed value for any binomial model with a given link, and the observation-level variance, which is computed from the data you are actually modelling.
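For a logit link, the theoretical distribution-specific variance is pi^2/3. A minimal sketch of how that choice enters the marginal and conditional R2, using standard lme4 accessors (the variable names here are mine, not from either package):

```r
library(lme4)
data("cbpp")
mod <- glmer(incidence / size ~ period + (1 | herd), weights = size,
             family = binomial, data = cbpp)

# Variance explained by the fixed effects (variance of the linear predictor)
var_f <- var(as.vector(model.matrix(mod) %*% fixef(mod)))
# Sum of random-effect variances (here just the herd intercept)
var_r <- sum(sapply(VarCorr(mod), function(v) v[1]))
# Theoretical distribution-specific variance for the logit link;
# the "delta" method would substitute an observation-level estimate here
sigma2_d <- pi^2 / 3

R2m <- var_f / (var_f + var_r + sigma2_d)            # marginal R2
R2c <- (var_f + var_r) / (var_f + var_r + sigma2_d)  # conditional R2
```

Swapping sigma2_d between the theoretical value and an observation-level estimate is exactly what moves the numbers around, which is why the two packages can disagree while both being "correct".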

sem.model.fits has been superseded by rsquared in the piecewiseSEM package. rsquared requires you to choose between the theoretical variance (method = "theoretical") and the observation-level variance (method = "delta"). The old function sem.model.fits gives only the theoretical variance. r.squaredGLMM from the most recent version (as of 19 Nov 2019) of MuMIn returns both the "theoretical" and the "delta" versions at once.
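So, to compare like with like, you can request each variant explicitly (argument names as described above; output layout may differ between package versions):

```r
library(piecewiseSEM)
rsquared(mod, method = "theoretical")  # should match the old sem.model.fits
rsquared(mod, method = "delta")        # observation-level variance
library(MuMIn)
r.squaredGLMM(mod)                     # recent versions report both methods
```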

I have not run your code to be sure, but I suspect you are comparing the "theoretical" value from sem.model.fits with the "delta" value from r.squaredGLMM.

I prefer r.squaredGLMM, as it gives results for both types of variance at once, and also because rsquared doesn't work (for me) when I try method = "delta" on a binomial proportion response built with cbind to create the success/failure matrix.

YJW
  • At the time I wrote this, the theoretical and delta methods were not implemented. I emailed both authors, and Dr. Lefcheck (piecewiseSEM) responded. He thought the differences might be due to how the distribution-specific error variance is computed. Right now, the 'theoretical' method gives the same result (R2m: 0.09, R2c: 0.19). Like you said, the delta method in rsquared gives an error. Previously r.squaredGLMM gave 0.11 for both R2m and R2c, which it no longer gives for either the delta or theoretical R2m and R2c. So I'm not sure what was going on there, but it appears to be behaving better now. – Kevin Nov 20 '19 at 15:28