
I have an experiment in which subjects reported multiple binary responses under two treatments. I have compared each subject separately to see whether the treatment had an effect on that subject, but I would also like to analyse the data as a whole, so I have gone with a generalized linear mixed-effects model (I have never done this type of analysis before).

I'm using the lme4 package in R and the glmer function, and I want to see the effect the treatment has on the results, so I have done the following:

model <- glmer(Response ~ Treatment + (Treatment|Subject), data=data, family=binomial(link=logit))

So the model treats the treatment as a fixed effect and lets the treatment effect vary across subjects as a random effect (a random slope). I am then looking at the significance of the treatment in the model. Does that make sense or am I off my rocker?
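
For reference, this is a minimal sketch of how I'm checking that significance, using only standard lme4 functions (the Wald z-tests that summary() reports, plus profile confidence intervals, which can be slow but use the same model object):

summary(model)$coefficients      # fixed-effects table (Wald z-tests), reproduced below
confint(model, parm = "beta_")   # profile-likelihood CIs for the fixed effects only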

Model output:

Fixed Effects:

             Estimate  Std. Error  z value  Pr(>|z|)
(Intercept)   -1.5066      0.2466   -6.109     1e-09 ***
Treatment     -0.6620      0.2803   -2.362    0.0182 *

Random Effects:

Groups   Name         Variance  Std.Dev.  Corr
Subject  (Intercept)  0.4085    0.6391
         Treatment    0.002103  0.04585   -1.00

Thanks!

DanE
  • Makes sense to me. I may possibly worry about overfitting by including treatment as a random effect in addition to the fixed effect. Does the fixed-effect estimate change significantly if you remove it as a random effect? Also, that corr estimate of -1 seems odd. – Glen Feb 20 '19 at 20:20
  • Is the difference in treatment amongst participants really random, or is it just that you are randomising them? If the latter, then I don't think you need to make treatment a random effect, as the `Subject` variable takes care of that. Try making another model with `(1|Subject)` as the random effect, then perform a likelihood ratio test (`anova(model1, model2)`); a sketch of this comparison is given after the comments. If the model without the treatment random effect is the same or superior, then use that. Also, if this is a repeated-measures design you should probably account for those repeats in the fixed-effects part of the model somewhere. – llewmills Feb 20 '19 at 20:21
  • I think your model is overparameterized, as @llewmills mentions. See here: https://stats.stackexchange.com/questions/323273/what-to-do-with-random-effects-correlation-that-equals-1-or-1. If the order of treatments was randomized, then include that order term as a fixed effect. – Glen Feb 20 '19 at 20:30
  • I removed treatment from the random effects, i.e. now it's (1|Subject), and it was pretty much the same; the LRT came back with 0.98, so I'm not seeing a difference there. I thought that since each subject was in both treatments I'd have to include that connection in the random effects. @llewmills how would you suggest incorporating the repeated measures into the fixed effects? – DanE Feb 20 '19 at 20:33
  • @Glen thanks. The order of treatments was randomized, some did treatment 1 first and some did treatment 2 first. I did a bit of analysis on the side to see if it was having an effect and it didn't appear so. Would you suggest something like Treatment*Position, where position is just an indicator of which treatment they did first? – DanE Feb 20 '19 at 20:35
  • Oh I see, so it sounds like `treatment` already is the repeated-measures factor (it has multiple levels, I gather, and a single subject can be observed at some or all of those levels?). If so, I think your model might be good to go. – llewmills Feb 20 '19 at 20:40
  • Yes, I would suggest adding treatment*position. – Glen Feb 20 '19 at 20:47
  • @llewmills sorry, I should be more clear; it's hard to explain all the components without writing a novel. Each subject does both treatments, and within each treatment they have to answer 25 questions where the response is 0/1. The treatment order is randomized: some did treatment 1 first and some did treatment 2 first. The order of the questions is also randomized, both between treatments (e.g. "question 37" appears half of the time in treatment 1 and half of the time in treatment 2) and within each treatment (e.g. "question 37"'s placement within a treatment was random). – DanE Feb 20 '19 at 20:47
  • Ah, this is much more complex. I think question within treatment might be a random effect. – llewmills Feb 20 '19 at 21:12
  • @llewmills Hmm, this is getting tricky, because the set of questions within each Treatment is different for each Subject. I certainly need the random effect of the Subject, but then I also need the question number (QN). So for the random effects, something like (1|Subject) + (Subject|Treatment:QN): the (1|Subject) term should cover any random effects of the subjects alone, while (Subject|Treatment:QN) covers the random effects of subjects that share a combination of Treatment and question number. – DanE Feb 20 '19 at 22:08
  • Yes, it sounds like it. This is why I asked about your sample size; it may be that you have too many random effects to achieve convergence. I'm sorry to do this to you, but I can *highly* recommend Pinheiro and Bates' book *Mixed-Effects Models in S and S-PLUS*. Chapter 1 walks through some common repeated-measures designs, with example code. I'm sure it will help you a lot. – llewmills Feb 21 '19 at 03:04
  • ...and all the examples work in R. – llewmills Feb 21 '19 at 03:05
  • Perhaps `(1|Subject) + (1|QN/Treatment)`? Random effects such as these can be fiendishly confusing; there are so many ways to model them. – llewmills Feb 21 '19 at 03:08
  • @llewmills thanks for the reference, I will definitely check it out. The model converged easily with what you suggested. I'll do a bit more reading before I choose a final model, but for now it's Treatment*TreatmentPosition + (1|Subject) + (1|QN/Treatment) (a sketch of this model appears after the comments). The model found the Treatment fixed effect significant, while TreatmentPosition and the Treatment:TreatmentPosition interaction were not. Thanks for all the help! – DanE Feb 21 '19 at 13:56
  • That's great news @DanE, congrats! That's what you hypothesised, I hope. Perhaps you could up-vote some of my comments? I need the rep :) – llewmills Feb 22 '19 at 02:54
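
For anyone following along, here is a sketch of the model comparison suggested in the comments, using the same data and variable names as the question; model2 simply drops the random slope for Treatment:

model1 <- glmer(Response ~ Treatment + (Treatment|Subject), data = data, family = binomial(link = logit))
model2 <- glmer(Response ~ Treatment + (1|Subject), data = data, family = binomial(link = logit))

# Likelihood ratio test of the random slope (this is the comparison that came back with 0.98 above)
anova(model1, model2)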
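
And here is a sketch of the final model arrived at in the last comments. TreatmentPosition is the treatment-order indicator and QN the question number, as named in the comments; note that (1|QN/Treatment) expands to random intercepts for question number and for each question-by-treatment combination:

model3 <- glmer(Response ~ Treatment * TreatmentPosition + (1|Subject) + (1|QN/Treatment),
                data = data, family = binomial(link = logit))

# Per the final comment, Treatment was significant here while TreatmentPosition
# and the Treatment:TreatmentPosition interaction were not
summary(model3)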

0 Answers