
I am new to mixed-effects models. I am running a linear mixed-effects model on Sholl analysis data using lme4, with the following formula to look for changes in intersections across experimental groups (Exp).

model2 <- lmer(Intersections ~ 1 + Exp + Radius + Exp * Radius + (1 | Exp), data = data_dmNC)

output:

> summary(model2)
Linear mixed model fit by REML. t-tests use Satterthwaite's method [lmerModLmerTest]
Formula: Intersections ~ 1 + Exp + Radius + Exp * Radius + (1 | Exp)
   Data: data_dmNC

REML criterion at convergence: 10301.5

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-1.8671 -0.6914 -0.2018  0.4566  5.8317 

Random effects:
 Groups   Name        Variance Std.Dev.
 Exp      (Intercept) 0.260    0.5099  
 Residual             4.805    2.1920  
Number of obs: 2330, groups:  Exp, 3

Fixed effects:
                         Estimate Std. Error         df t value Pr(>|t|)    
(Intercept)             5.344e+00  5.434e-01  4.102e-09   9.835  1.00000    
ExpTrained             -2.003e-01  7.608e-01  3.940e-09  -0.263  1.00000    
ExpUndertrained        -8.954e-01  7.590e-01  3.903e-09  -1.180  1.00000    
Radius                 -5.863e-02  4.715e-03  2.324e+03 -12.434  < 2e-16 ***
ExpTrained:Radius       2.359e-02  5.562e-03  2.324e+03   4.241 2.31e-05 ***
ExpUndertrained:Radius  1.567e-02  5.787e-03  2.324e+03   2.708  0.00682 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
            (Intr) ExpTrn ExpUnd Radius ExpT:R
ExpTrained  -0.714                            
ExpUndrtrnd -0.716  0.511                     
Radius      -0.304  0.217  0.218              
ExpTrnd:Rds  0.258 -0.276 -0.185 -0.848       
ExpUndrtr:R  0.248 -0.177 -0.273 -0.815  0.691

My problem is that the Exp factor has 3 levels (Trained, No Association and Undertrained), but the summary only shows output for Trained and Undertrained. I am not able to figure out why.

Secondly, I also want to compare the Trained, No Association and Undertrained groups the way we would with an ANOVA and post hoc tests. Is this possible with a linear mixed-effects model? If yes, can you suggest what changes I should make in my code?

1 Answer


lmer (like other linear regression functions in R) transforms factors into dummy variables. So your factor Exp becomes the two dummy variables ExpTrained and ExpUndertrained, with "No Association" as the reference category.

Dummy variable     Trained   Undertrained   No Association
ExpTrained            1            0               0
ExpUndertrained       0            1               0

So "No Association" is coded as 0 in both ExpTrained and ExpUntrained.

The difference between Trained and No Association is the coefficient of ExpTrained (-0.2003), and the difference between Undertrained and No Association is the coefficient of ExpUndertrained (-0.8954); because of the interaction with Radius, each of these applies when Radius is 0. Only the direct comparison between Trained and Undertrained is missing from the output.
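If you mainly need that one missing contrast, a simple option (a sketch, assuming Exp is a factor in data_dmNC) is to make Trained the reference level and refit the same model; the ExpUndertrained coefficient then gives the Undertrained minus Trained difference at Radius = 0:

# Relevel so that Trained is the reference category, then refit the same model
data_dmNC$Exp <- relevel(factor(data_dmNC$Exp), ref = "Trained")
model2_relev <- lmer(Intersections ~ 1 + Exp + Radius + Exp * Radius + (1 | Exp),
                     data = data_dmNC)
summary(model2_relev)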

If you prefer an ANOVA-style output, you can simply run:

anova(model2)
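For the post hoc pairwise comparisons you asked about, a minimal sketch in the spirit of the thread linked in the comments below, using the emmeans package (assuming it is installed):

library(emmeans)
# Estimated marginal means per Exp group (evaluated at the mean Radius),
# followed by Tukey-adjusted pairwise comparisons
emm <- emmeans(model2, ~ Exp)
pairs(emm)
# If differences between the Radius slopes of the groups are of interest instead:
emtrends(model2, pairwise ~ Exp, var = "Radius")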
  • Thank you, now I get it. So if I want to see the difference between Trained and Undertrained, I will have to run the model separately with just these two categories... – Pooja Parishar Sep 08 '21 at 14:49
  • Yes. Or you can change the dummy coding so that Trained or Undertrained is the reference category. – Davide Capponi Sep 13 '21 at 12:39
  • or probably better: https://stats.stackexchange.com/questions/237512/how-to-perform-post-hoc-test-on-lmer-model – Davide Capponi Sep 19 '21 at 11:57