
I am new to R and lmer. Our project examines the effects of mimicking others' voices (mimicry vs. non-mimicry; a between-subjects fixed effect), modality (reading vs. listening; within-subjects), valence (positive vs. negative; within-subjects), and category (personality vs. appearance; within-subjects) on the imitator's judgement of some comments (Pleasantness, rated 1-7). We constructed a theoretically full mixed-effects model and want to decide on the best model. Following the recommendations of a paper, we used the mixed() function from the afex package to conduct likelihood-ratio tests. Below is the summary of the model estimation. We are not sure how to interpret the result: in rows 10 and 11 the p-values are significant, but in rows 8 and 9 they are not. In this case, how should we decide which terms the best model should include? Do we simply keep the terms whose p-values are significant and drop those that are not?

Your help and advice will be much appreciated!
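For reference, the call that produced the table below would have been along these lines. This is a reconstruction from the printed model formula and the "Type 3 tests, LRT-method" header, not the original script; `df` is the data frame from the study and is assumed to contain the factors and `Subject` identifier shown in the formula.

```r
library(afex)

# Reconstructed call (assumed): method = "LR" requests the Type 3
# likelihood-ratio tests printed in the table below.
m_full <- mixed(
  Pleasantness ~ mimicry * modality * valence * category + (1 | Subject),
  data = df, method = "LRT"
)
m_full  # prints the ANOVA-style table of chi-square tests
```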

Fitting 16 (g)lmer() models:
[................]
Mixed Model Anova Table (Type 3 tests, LRT-method)

Model: Pleasantness ~ mimicry * modality * valence * category + (1 | 
Model:     Subject)
Data: df
Df full model: 18
                              Effect df       Chisq p.value
1                            mimicry  1      6.44 *    .011
2                           modality  1   19.00 ***   <.001
3                            valence  1 4493.93 ***   <.001
4                           category  1     7.86 **    .005
5                   mimicry:modality  1      3.74 +    .053
6                    mimicry:valence  1        0.37    .542
7                   modality:valence  1      4.23 *    .040
8                   mimicry:category  1        2.22    .136
9                  modality:category  1        0.68    .409
10                  valence:category  1      5.53 *    .019
11          mimicry:modality:valence  1      4.12 *    .042
12         mimicry:modality:category  1        0.09    .770
13          mimicry:valence:category  1        0.53    .465
14         modality:valence:category  1        0.24    .622
15 mimicry:modality:valence:category  1        0.63    .428
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘+’ 0.1 ‘ ’ 1
    Welcome to the site. You say the outcome is continuous from 1 to 7, do you perhaps mean it is measured on a Likert scale? Also, selecting a model by dropping insignificant terms will give you a model that fits the *sample* well, but generalizes very poorly to the population of interest. See [here](https://stats.stackexchange.com/q/20836/176202). – Frans Rodenburg Sep 20 '21 at 05:11
  • Thanks for the reply. Yes, the outcome is measured on a Likert scale, ranging from 1 to 7. Also thanks for the insight about the difference between sample and population. My main concern is how to interpret the result returned by the mixed() function and, based on that, select a best model that answers my research question. – Cai Rendong Sep 20 '21 at 05:13
  • I understand, but this approach will not give you a 'best' model, or even something close to it. You should start with a model that seems most likely from a theoretical point of view. If there are a handful you cannot choose between, *then* you could use a test to choose which you want. For starters, I would get rid of the higher-order interactions. Can you even meaningfully interpret the second-order interactions (e.g. `mimicry:modality:valence`), let alone the third-order interaction? – Frans Rodenburg Sep 20 '21 at 05:30
  • Thank you so much, Frans. If my understanding is correct, you mean I should compare a limited set of theoretically meaningful models manually, rather than let some algorithm do that for me. That was my original approach anyway. Btw, `mimicry:modality:valence` is still theoretically interpretable. However, the term `modality:category` is less theoretically meaningful, nor is it of main interest to the study. Is it valid to drop the term `modality:category` on this ground? Thanks again @FransRodenburg – Cai Rendong Sep 20 '21 at 07:37
  • Yes, that would be a valid reason to exclude it. If you do so, you should also exclude all interactions involving the term you removed (e.g. `mimicry:modality:valence:category`), so that your model does not violate the principle of marginality. – Frans Rodenburg Sep 20 '21 at 11:43
  • @FransRodenburg many thanks for the advice. Your help means a lot to a newcomer to this site. – Cai Rendong Sep 20 '21 at 14:26
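The reduced model discussed in the comments could be sketched as follows. This is an illustrative sketch, not code from the thread: dropping `modality:category` while respecting marginality means also removing every higher-order interaction that contains it, which R's formula `-` operator makes explicit.

```r
library(afex)

# Sketch (assumes the same df as above): remove modality:category
# and all higher-order interactions containing it, per the principle
# of marginality discussed in the comments.
m_reduced <- mixed(
  Pleasantness ~ mimicry * modality * valence * category
    - modality:category
    - mimicry:modality:category
    - modality:valence:category
    - mimicry:modality:valence:category
    + (1 | Subject),
  data = df, method = "LRT"
)
```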

0 Answers