I am using glmer() in R to run a mixed logistic regression with 3 categorical (dichotomous) predictors. The outcome is whether or not a participant responded correctly to a memory check. The memory check is administered twice, so I include a random intercept for participant. When I run the model, some of the coefficients are huge, including the intercept. One predictor is the time at which the memory check was administered (before or after the experiment): at pretest performance is almost perfect and at posttest it is around 80%. Its OR is in the millions, but the chi-square seems reasonable at 6.788.
I'm also not sure why the intercept OR would be so large, given that overall performance across roughly 200 trials is only about 90%.
Edit: It appears this is probably an issue of separation, so my questions are:

1) Are the statistics (chi-square and p value) still valid?

2) Is there another way to obtain valid estimates and ORs, or would I have to calculate them by hand? (One possibility is sketched below.)

3) Should I use a different model, and if so, what kind? A binomial test has been suggested, but that doesn't seem to address the repeated-measures aspect of the design.

4) Another option would be to drop the predictor that has near-perfect performance at one level. That would prevent me from testing the change in performance across time, which might be fine.
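For question 2, one approach I'm considering (I'm not sure it's the right one) is putting a weakly informative prior on the fixed effects to keep the estimates finite under separation, e.g. with the blme package. This is only a sketch; the prior covariance (sd of 3 on each of the 4 fixed effects) is just an assumption:

library(blme)

# Same model as below, but with a weak normal prior on the fixed effects,
# which is a common remedy for separation; diag(9, 4) = variance 9 for each
# of the 4 fixed effects (intercept + 3 predictors).
m.b <- bglmer(mem.acc ~ ingroupwaited + personalconnect + mem.time + (1 | id),
              data = d.mem, family = binomial(link = "logit"),
              fixef.prior = normal(cov = diag(9, 4)))

summary(m.b)
exp(fixef(m.b))  # odds ratios on the original scale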
Here is a sample of what my data look like:
> head(d.mem)
id agemonths ingroupwaited personalconnect mem.time mem.acc
1 66 0 0 0.5 1
1 66 0 0 -0.5 0
2 69 1 1 0.5 1
2 69 1 1 -0.5 1
And my model:
glmer(mem.acc ~ ingroupwaited + personalconnect + mem.time + (1 | id), data = d.mem, family = binomial(link = "logit"))
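Regarding question 1, my understanding is that the Wald z statistics and ORs are what break down under separation (the standard errors blow up), while a likelihood-ratio test is less affected. A sketch of how I would check whether the chi-square holds up, with placeholder model names:

library(lme4)

# Full model and a reduced model without mem.time, compared with a
# likelihood-ratio test; m.full and m.red are just placeholder names.
m.full <- glmer(mem.acc ~ ingroupwaited + personalconnect + mem.time + (1 | id),
                data = d.mem, family = binomial(link = "logit"))
m.red  <- glmer(mem.acc ~ ingroupwaited + personalconnect + (1 | id),
                data = d.mem, family = binomial(link = "logit"))
anova(m.red, m.full)  # LRT chi-square and p value for mem.time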