
I have run linear mixed models (LMMs) with different reference categories, and they yield different results:

> summary(lmer3)
Linear mixed model fit by maximum likelihood ['merModLmerTest']
Formula: v000001 ~ (1 | item) + (1 + color | speaker) + Language * color * sex
   Data: data1.frame 

      AIC       BIC    logLik  deviance 
16279.975 16377.355 -8119.988 16239.975 

Random effects:
 Groups   Name        Variance  Std.Dev.  Corr       
 speaker  (Intercept) 8.904e+05 9.436e+02            
          colorblue   1.821e+05 4.267e+02 -0.35      
          colorred    3.428e+05 5.855e+02 -0.44  1.00
 item     (Intercept) 9.502e-06 3.083e-03            
 Residual             1.067e+06 1.033e+03            
Number of obs: 962, groups: speaker, 53; item, 10

Fixed effects:
                                  Estimate Std. Error       df t value Pr(>|t|)    
(Intercept)                       10664.67     318.69    38.45  33.464   <2e-16 ***
Languagel2_like                     391.48     421.40    42.13   0.917   0.3642    
colorblue                          -179.31     211.02    44.50  -0.850   0.4000    
colorred                            116.96     241.44    36.27   0.484   0.6310    
sexmale                            -168.01     450.11    38.26  -0.373   0.7110    
Languagel2_like:colorblue           758.22     301.01    54.20   2.519   0.0147 *  
Languagel2_like:colorred            463.37     344.01    45.73   1.344   0.1857    
Languagel2_like:sexmale            -811.49     607.85    43.49  -1.326   0.1917    
colorblue:sexmale                   342.76     294.97    42.57   1.162   0.2517    
colorred:sexmale                     13.25     337.44    34.81   0.039   0.9689    
Languagel2_like:colorblue:sexmale  -721.37     438.78    54.19  -1.644   0.1059    
Languagel2_like:colorred:sexmale   -605.76     497.75    45.29  -1.216   0.2304    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
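
(Note: under R's default treatment contrasts, the first level of a factor is absorbed into the intercept, which is why only two color dummies appear above. A quick way to check which level is the baseline, assuming color is stored as a factor in data1.frame:)

## Check which level of color is the reference (absorbed into the
## intercept) under R's default treatment contrasts:
levels(data1.frame$color)      # the first level listed is the baseline
contrasts(data1.frame$color)   # dummy coding for the remaining levels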

And this is the second model:

> summary(lmer43)
Linear mixed model fit by maximum likelihood ['merModLmerTest']
Formula: v000001 ~ (1 | item) + (1 + color3 | speaker) + Language * color3 * sex
   Data: data1.frame 

      AIC       BIC    logLik  deviance 
16279.975 16377.355 -8119.988 16239.975 

Random effects:
 Groups   Name        Variance  Std.Dev.  Corr       
 speaker  (Intercept) 7.945e+05 8.913e+02            
          color3white 1.821e+05 4.268e+02 -0.11      
          color3red   2.761e+04 1.661e+02 -0.24 -0.94
 item     (Intercept) 4.961e-06 2.227e-03            
 Residual             1.067e+06 1.033e+03            
Number of obs: 962, groups: speaker, 53; item, 10

Fixed effects:
                                     Estimate Std. Error       df t value Pr(>|t|)    
(Intercept)                          10485.36     305.33    39.57  34.341  < 2e-16 ***
Languagel2_like                       1149.70     399.61    43.91   2.871  0.00627 ** 
color3white                            179.31     211.03    44.50   0.850  0.40004    
color3red                              296.27     167.08   125.59   1.773  0.07862 .  
sexmale                                174.75     430.05    38.94   0.406  0.68671    
Languagel2_like:color3white           -758.22     301.01    54.10  -2.519  0.01477 *  
Languagel2_like:color3red             -294.85     244.46   159.09  -1.206  0.22953    
Languagel2_like:sexmale              -1532.85     577.70    44.74  -2.648  0.01114 *  
color3white:sexmale                   -342.76     294.98    42.57  -1.162  0.25171    
color3red:sexmale                     -329.51     228.57   113.99  -1.442  0.15215    
Languagel2_like:color3white:sexmale    721.36     438.78    54.10   1.644  0.10602    
Languagel2_like:color3red:sexmale      115.61     351.65   162.98   0.329  0.74275    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Is it possible to report results from two models? (I know that very few people would do this, but these two models give different pictures.) What should I do? Which one should I believe?
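
(For reference, a minimal sketch of how the two parameterizations relate, using the variable names from the output above; that color3 was created by releveling color to a "blue" baseline is an assumption inferred from the dummy names:)

library(lmerTest)   # provides lmer() with p-values (class merModLmerTest)

## color3 appears to be the same factor as color with a different
## reference level ("blue" instead of the default); relevel() is one
## way such a refit could have been produced:
data1.frame$color3 <- relevel(data1.frame$color, ref = "blue")
lmer43 <- lmer(v000001 ~ (1 | item) + (1 + color3 | speaker) +
                 Language * color3 * sex,
               data = data1.frame, REML = FALSE)

## Both parameterizations describe the same fitted model -- note the
## identical AIC/logLik in the two summaries. This can be verified:
logLik(lmer3)                              # equals logLik(lmer43)
all.equal(fitted(lmer3), fitted(lmer43))   # TRUE: identical predictions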

user3288202
  • These are the *same* model, restated. Can you please state what the two different pictures are that you're inferring from the two different views of the same model? (If I understand you correctly, your question has to do with reparameterizing models in general, and is not at all specific to LMMs.) – Ben Bolker Mar 20 '14 at 21:11
  • For example, suppose the colours (blue, white, red) have associated mean estimates of 0, 1, 2 with a standard error of about 0.75. If blue is the baseline you'll get (intercept=0, 'white'=1, 'red'=2); the blue-vs-red contrast will be significant at p<0.05. If white is the baseline you'll get (intercept=1,'blue'=-1, 'red'=1), and neither of the individual contrasts will be significant. In the presence of interactions the argument gets a little more complicated but it's basically the same idea. – Ben Bolker Mar 20 '14 at 21:14
  • Hi Ben Bolker, yes, my question is about reparameterizing models, as you said. Comparing these two models, the first one tells me there is no difference in the first parameter, whereas the second one shows a difference in this parameter. So for me, I should also report both models. What do you think? – user3288202 Mar 21 '14 at 06:05
  • I think you don't understand the default parameterization of linear models in R. You might want to read about/use treatment contrasts: in particular, the `mixed` "helper" package for `lme4` does so by default. – Ben Bolker Mar 21 '14 at 12:17
  • Umm, Ben Bolker, so should I report only one model, i.e. choose results from only one model regardless of what the results of the second model would be? – user3288202 Mar 21 '14 at 12:55
  • You should work first on understanding what the models are telling you -- you shouldn't present a model you don't understand. Once you understand how the models are parameterized, you'll see that the results are *not* qualitatively different (what's different is the **meaning** of the first parameter) – Ben Bolker Mar 21 '14 at 13:20
  • Thanks Ben, but this is very confusing for me. It's weird seeing that the first parameter is not significant in the first model but significant in the second. I understand that they are not qualitatively different, but their significance brings two different results, i.e. the first one tells me that LanguageL2 is not important whereas the second one tells me it is. – user3288202 Mar 21 '14 at 13:31
  • OK, now I am clear that a difference in p-values between models with different baselines is possible. So I will just report what the statistics show. Thank you very much, Ben Bolker. – user3288202 Mar 21 '14 at 16:15
  • I really don't have time to give a more thorough answer right now; I hope someone else will, or that you can find a useful discussion of contrasts and interactions online. I will just repeat that **the meaning of the LanguageL2 parameter DIFFERS** between the first and second models; you can't just say "one model says LanguageL2 is not important, the other says it is" (you should also not interpret p-values as denoting "importance"). – Ben Bolker Mar 21 '14 at 17:14
  • Perhaps http://stats.stackexchange.com/questions/33709/interpreting-the-regression-output-from-a-mixed-model-when-interactions-between would be useful – Ben Bolker Mar 21 '14 at 17:23
  • Or http://stats.stackexchange.com/questions/33516/why-does-the-model-change-when-using-relevel/33517#33517 – Ben Bolker Mar 21 '14 at 19:40
  • Right. I think it makes more sense to me now. Many thanks, Ben Bolker. – user3288202 Mar 21 '14 at 20:37
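
(To make the toy example in Ben Bolker's second comment concrete, here is a small simulation; the data are hypothetical, with group means 0, 1, 2 and a contrast standard error near 0.75, as in the comment:)

set.seed(1)
d <- data.frame(color = factor(rep(c("blue", "white", "red"), each = 20)))
d$y <- c(blue = 0, white = 1, red = 2)[as.character(d$color)] + rnorm(60, sd = 2.4)

## Baseline "blue" (the default, alphabetically first): the "red"
## coefficient estimates the blue-vs-red difference (true value 2),
## which may fall below p = 0.05:
coef(summary(lm(y ~ color, data = d)))

## Baseline "white": the contrasts are roughly -1 and +1, and neither
## may be significant, even though the fitted group means are identical:
coef(summary(lm(y ~ relevel(color, ref = "white"), data = d)))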

0 Answers