
I am having difficulty with the structure of a binomial mixed-effects model. I'm using brms, but my question is more about model design than Bayesian modeling, so I hope to get some good insights from the broader audience here.

Briefly, I have ~200 individuals, each of whom was tested in each of 15 conditions, with a binary response (T/F) per condition. Each condition falls into one of three groups, although unevenly (3 conditions in group A, 9 in group B, 3 in group C). I want to test whether there is evidence for differences among the three groups.
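For concreteness, the data are in long format, roughly like the hypothetical sketch below (simulated values; only the column names ID, condition, group, and response match my actual data):

set.seed(1)
# Hypothetical layout only: ~200 individuals x 15 conditions,
# each condition belonging to exactly one of the three groups
condition_key <- data.frame(
  condition = paste0("cond", 1:15),
  group     = rep(c("A", "B", "C"), times = c(3, 9, 3))
)
FBSpp_long_sim <- merge(
  expand.grid(ID = paste0("id", 1:200), condition = condition_key$condition),
  condition_key,
  by = "condition"
)
FBSpp_long_sim$response <- rbinom(nrow(FBSpp_long_sim), 1, 0.5)  # placeholder T/F outcomes
head(FBSpp_long_sim)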

The current structure of my model is: response ~ group + (1|condition) + (1|ID)

and the code, for those interested in the specific brms model:

bern_prior <- set_prior('normal(0, 3)')
FB_bern_group <- brm(response ~ group + (1|condition) + (1|ID),
                     data = FBSpp_long,
                     family = bernoulli(),
                     prior = bern_prior,
                     save_pars = save_pars(all = TRUE),
                     warmup = 10000,
                     iter = 20000,
                     chains = 3)

When I use this model structure, however, differences between this model and the null model (i.e. response ~ 1 + (1|condition) + (1|ID)) are negligible: the WAIC values and the resulting model weights are almost identical. Before I conclude there really is no difference among groups, I wanted to ask: is it possible I overdid the random effects by including (1|condition)?
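In case it helps, a sketch of how I am running that comparison (the null-model object name is just illustrative; add_criterion(), loo_compare(), and model_weights() are the brms functions I rely on, and I drop the b-class prior from the null model since it has no population-level slopes):

FB_bern_null <- brm(response ~ 1 + (1|condition) + (1|ID),
                    data = FBSpp_long,
                    family = bernoulli(),
                    save_pars = save_pars(all = TRUE),
                    warmup = 10000,
                    iter = 20000,
                    chains = 3)

# WAIC for each model, pairwise comparison, and WAIC-based model weights
FB_bern_group <- add_criterion(FB_bern_group, "waic")
FB_bern_null  <- add_criterion(FB_bern_null, "waic")
loo_compare(FB_bern_group, FB_bern_null, criterion = "waic")
model_weights(FB_bern_group, FB_bern_null, weights = "waic")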

rstewa03

1 Answer


I don't think you have overdone the random effects. Responses are cross-classified within persons and conditions, and you have 15 conditions, which is probably sufficient for treating condition as a random intercept. An alternative is to treat condition as a fixed categorical predictor, but if you do that, I am not sure you will be able to model a fixed slope for group: if you know which condition an observation came from, you also know its group, so group would be collinear with condition.
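To make that concrete, here is a sketch of the fixed-effect alternative (object name is illustrative): once condition enters as a factor, the group dummies are linear combinations of the condition dummies, so there is nothing left for a group coefficient to estimate.

# Condition as a fixed categorical predictor (sketch only);
# group cannot be added as well because it is fully determined by condition
FB_bern_cond <- brm(response ~ condition + (1|ID),
                    data = FBSpp_long,
                    family = bernoulli(),
                    prior = bern_prior)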

You may want to explore variation in the slope of group across individuals by augmenting your ID random intercept with a random slope for group, i.e. (group|ID). You will need to check how brms handles a categorical predictor specified as a random slope; there are many options for this in lme4. See Michael Clark's helpful tutorial on dealing with multi-category variables as random slopes.

With brms, (0 + group|ID) may work. If not, you will have to create separate 0/1 indicator variables for each group and include those instead. You may also want to remove the covariance between the ID intercept and the separate group slopes, although such covariances tend to be less problematic when estimating via MCMC.
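A sketch of both options in brms syntax (the indicator names groupB/groupC and the group labels are illustrative, assuming group A is the reference level):

# Option 1: categorical random slope for group across individuals
FB_bern_slope <- brm(response ~ group + (1|condition) + (0 + group|ID),
                     data = FBSpp_long,
                     family = bernoulli(),
                     prior = bern_prior)

# Option 2: hand-coded 0/1 indicators, with || dropping the correlations
# between the ID intercept and the group slopes
FBSpp_long$groupB <- as.integer(FBSpp_long$group == "B")
FBSpp_long$groupC <- as.integer(FBSpp_long$group == "C")
FB_bern_dummy <- brm(response ~ group + (1|condition) + (1 + groupB + groupC || ID),
                     data = FBSpp_long,
                     family = bernoulli(),
                     prior = bern_prior)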

Erik Ruzek
Thanks for your suggestions! I tried letting the group slopes vary across individuals, and while it reduced the WAIC, the traces of the different chains were skewed and the conditional-effects estimates did not fit the data well, so I am planning to stick with the original random-intercepts model. – rstewa03 May 06 '21 at 08:54