1

The documentation of the `effects` package says:

"If asked, the effect function will compute effects for terms that have higher-order relatives in the model, averaging over those terms (which rarely makes sense)"

My question is: why does it rarely make sense? What if we have a mixed model like `y ~ f1*f2*f3 + (1|sub)` and only the `f1:f2` interaction is significant? Isn't it reasonable to ignore the 3-way interaction and look at the simpler (and significant) 2-way interaction(s)?
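
To make the setup concrete, here is a minimal sketch of the kind of model and `effects` call I have in mind (the data frame `dat`, with factors `f1`, `f2`, `f3` and grouping factor `sub`, is hypothetical, and I'm assuming `effects` handles the `lmer` fit):

```r
## Minimal sketch (hypothetical data frame `dat`): fit the full factorial
## mixed model and then request the f1:f2 effect from the effects package.
library(lme4)
library(effects)

mod <- lmer(y ~ f1 * f2 * f3 + (1 | sub), data = dat)

## f1:f2 has a higher-order relative (f1:f2:f3) in the model, so effects
## averages over f3 and issues the warning the documentation refers to.
plot(Effect(c("f1", "f2"), mod))
```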

locus
  • What will you do to ignore the 3-way interaction? – user158565 Nov 04 '18 at 01:52
  • Some related posts: https://stats.stackexchange.com/questions/27724/do-all-interactions-terms-need-their-individual-terms-in-regression-model, https://stats.stackexchange.com/questions/62921/how-to-interpret-the-significant-interaction-of-two-non-significant-main-predict – kjetil b halvorsen Nov 04 '18 at 12:37
  • @a_statistician. I mean I can just ignore the 3-way interaction by looking at the 2-way interactions. The way I understand it, the `effects` package averages over higher-order terms in the model when looking at lower-order interactions – locus Nov 04 '18 at 17:42
  • If I do this analysis, I will fit a new model without the three-way interaction. Anyway, the first step is model selection. Then work on estimating and testing the specific effects. – user158565 Nov 04 '18 at 17:50
  • @a_statistician, but if you fit a model like `y ~ f1*f2*f3` and the 3-way interaction is not significant, which factor will you drop? Would you fit a new model for each excluded factor, `y~f1*f2`, `y~f1*f3` and `y~f2*f3`? – locus Nov 04 '18 at 18:05
  • 1
    Your current model has f1, f2, f3, f1*f2, f1*f3, f2*f3 and f1**f2,f3. The new model should have f1, f2, f3, f1*f2, f1*f3 and f2*f3. I do not know how to specify the new model in R, but I think there is a way to do it in R. – user158565 Nov 04 '18 at 18:11
  • Thanks @a_statistician, that makes sense. But how would you deal with a situation in which you have 4 factors (f1, f2, f3, f4) and only the `f1:f4` and `f1:f2:f3` interactions were significant? The `f1:f4` interaction might still be interesting to examine even though it has higher-order relatives in the model. You cannot drop the 3-way interactions because of `f1:f2:f3`. So the question remains: why doesn't it make sense to look at `f1:f4`, which is a lower-order interaction in the model? – locus Nov 05 '18 at 23:15
  • 1
    What I will do is getting the satisfied model first. Before fitting model, set up the p-value level for excluding item from the model, for example, 0.10. Suppose there are 4 factors, fit a model with all of the interaction, including f1*f2*f3*f4. If f1*f2*f3*f4 can be excluded, fit a new model, check the third order interactions,..., until no more item can be excluded. Next, read the model carefully, and make the decision on what to estimate/test, ==> construct L matrix,==> perform the estimate/test. – user158565 Nov 06 '18 at 03:54
  • Thanks for your comment @a_statistician. Just to make sure I understand, let's say I set up the p-value at 0.10 and only the `f1:f2:f3` was <0.10 (the `f1:f2:f4`, `f1:f3:f4` and `f2:f3:f4` were all >0.10). Should I exclude _all_ 3-way interactions that were >0.10 from the model, like this `y ~ (main effects)+(2-way interactions)+f1:f2:f3` or should I still include all 3-way interactions because at least one 3-way interaction (`f1:f2:f3`) was <0.10? – locus Nov 06 '18 at 21:28
  • 1
    Need to be step by step. Given p> 010 for 4 way-interaction, fit a model just exclude 4 way interaction, then check 3 3-way-interactions, if largest p > 0.10, exclude it, fit another new model. Sometimes, the not-sig-terms will become sig. after you exclude others. – user158565 Nov 06 '18 at 21:33
  • I see. So you would exclude them one by one. After excluding the 4-way interaction, I fit a new model `y ~ (main effects)+(2-way interactions)+(3-way interactions)`. If `f2:f3:f4` is larger than 0.10, I exclude it and fit a new model `y ~ (main effects)+(2-way interactions) + f1:f2:f3 + f1:f2:f4 + f1:f3:f4`. If `f1:f3:f4` is then larger than 0.10, I exclude it and fit a new model `y ~ (main effects)+(2-way interactions) + f1:f2:f3 + f1:f2:f4`, and so on, is that right? – locus Nov 06 '18 at 21:41
  • 1
    Yes. After the model is selected, then figure out what special things needing to be tested/estimated. – user158565 Nov 06 '18 at 23:52
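
To make the model specification and the step-by-step pruning discussed above concrete, here is a rough R sketch (the data frame `dat` is hypothetical and `lme4` is assumed; with 4 factors the same idea applies with `(f1 + f2 + f3 + f4)^3`, and so on):

```r
## Sketch of the backward pruning of interaction terms (hypothetical `dat`).
## Fit with ML (REML = FALSE) so likelihood-ratio comparisons of nested
## fixed-effect structures are valid.
library(lme4)

## Full factorial model, including the 3-way interaction:
m_full <- lmer(y ~ f1 * f2 * f3 + (1 | sub), data = dat, REML = FALSE)

## All main effects and 2-way interactions, but no 3-way interaction:
## (f1 + f2 + f3)^2 expands to f1 + f2 + f3 + f1:f2 + f1:f3 + f2:f3.
m_2way <- lmer(y ~ (f1 + f2 + f3)^2 + (1 | sub), data = dat, REML = FALSE)

## Likelihood-ratio test of the 3-way interaction:
anova(m_2way, m_full)

## drop1() respects marginality, so at each step it only tests terms that are
## not contained in a higher-order term still in the model:
drop1(m_full, test = "Chisq")

## If the highest-order term can be dropped, refit without it and repeat:
m_step <- update(m_full, . ~ . - f1:f2:f3)
drop1(m_step, test = "Chisq")
```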

1 Answer

What is your research question? Is it to understand how f1 and f2 combine to affect y, regardless of the levels of f3? Or is it how f1, f2 and f3 combine to affect y? Ideally, the model you fit to the data should reflect your research question. So that would be where I would start.

If the latter, then an insignificant p-value for the 3-way interaction simply means you have no evidence in the data to support the contention that f1 and f2 combine to affect y differently for different levels of f3. It may be that the 3-way interaction is real but you don't have enough power to detect it. (Tests for interactions tend to have low power.) Or it may be that the interaction is negligible. If you can estimate the size of the 3-way interaction via a confidence interval, that might give you a better clue as to what is going on.
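
For that last suggestion, here is a sketch of how one might obtain such intervals with `lme4` (object names are hypothetical; whether Wald intervals are adequate for your design is a separate question):

```r
## Sketch: confidence intervals for the fixed-effect coefficients, including
## the f1:f2:f3 terms (hypothetical objects `mod` and `dat`).
library(lme4)

mod <- lmer(y ~ f1 * f2 * f3 + (1 | sub), data = dat)

## parm = "beta_" restricts the intervals to the fixed effects; Wald intervals
## are quick, while method = "profile" or "boot" is slower but generally better.
confint(mod, parm = "beta_", method = "Wald")
```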

Isabella Ghement
  • Thanks for your answer, Isabella. Imagine I'm interested in predicting consumer satisfaction (`y`) from the type of product (`f1`) and the quality of service (`f2`). I expect `y` to increase the better `f1` and `f2` are. Let's also assume I registered the type of store (`f3`). I don't have specific predictions about `f3`, but would like to include it in the model to see if it interacts with the other factors. If I get an `f1:f2` interaction but not an `f1:f2:f3` interaction, why would it not make sense to look at `f1:f2`? – locus Nov 04 '18 at 17:38