When you have a random intercept, it's like having an exchangeable correlation structure, and when you have random intercepts and slopes, it's (loosely) like having an AR-1 correlation structure... assuming the random effects are simple rather than crossed or nested. The GEE working correlation structures actually cover a few cases that random effects do not. Syntactically, the two are often specified in very similar ways, which can be misleading.
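For instance, with a hypothetical data frame df containing an outcome y, a covariate x, a time variable time, and a cluster identifier id (illustrative names only, not part of the example further down), the two specifications look roughly like this as a sketch of the syntax:
library(geepack)
library(lme4)
## exchangeable working correlation ~ random intercept
geeglm(y ~ x, id = id, data = df, corstr = "exchangeable")
lmer(y ~ x + (1 | id), data = df)
## AR-1 working correlation ~ (very loosely) random intercept and slope on time
geeglm(y ~ x, id = id, data = df, corstr = "ar1")
lmer(y ~ x + (1 + time | id), data = df)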
You will recall that for linear models with an identity link, the interpretation of effects differs between GEE and mixed models: GEE estimates "population averaged" effects while mixed models estimate "individual level" effects. This carries over to non-identity links such as those in binomial and Poisson models: the GEE estimates marginal effects, interpreted as "population averaged," whereas the mixed model estimates conditional, "individual level" effects. Dovetail this with the proper interpretation of effects for logit and Poisson models, and the only remaining challenge is actually specifying and fitting these models.
As an example, suppose I was interested in the rate of asthma exacerbations before and after enacting a public policy to reduce air pollution. Suppose this policy is just unbelievably effective: I close all industry and force people to buy electric cars. An inadvertent effect is that severe asthmatics actually move to my city seeking better quality of life. Now I have data on hospitalizations for asthma attacks over time. In my GEE model I say asthma hospitalizations go UP after enacting this policy. In my mixed model, I say asthma hospitalizations go DOWN after enacting this policy. They are both right, it's just that I did not have quite as many severe asthmatics living in my city before cleaning the air. The mixed model is useful because I can predict that those who moved to my city had far worse asthma exacerbations before I ever observed them. The GEE is useful because I know my hospitals will actually have more patients in them seeking treatment for asthma.
Example (in R!)
library(geepack)
library(lme4)
set.seed(1234)
n <- 5000
id.wide <- 1:n
year <- 2000:2010
post <- year > 2007
bl.sev <- sample(0:2, n, prob = c(0.5, 0.25, 0.25), replace = TRUE) ## baseline severity (0 = mild, 1 = moderate, 2 = severe)
id.mig <- id.wide > n*0.65 ## the last 35% of ids migrate in
min.id.mig <- min(id.wide[id.mig]) ## id of the first migrant
bl.sev[id.mig] <- 2 ## all migrants are severe
id.long <- rep(1:n, each=length(year))
year.long <- rep(year, n)
ae.int <- -4 + c(0, 1.2, 5.3)[bl.sev[id.long]+1] + ## log intensity of asthma exacerbations by baseline severity
  -0.1 * (year.long > 2007) ## true individual-level policy effect: a small reduction for everyone
# ae.int <- -4 + c(0, 1.2, 5.3)[bl.sev[id.long]+1] ## intensity of asthma exacerbation
# ae.int <- -1 * (year.long > 2007) + 1*(bl.sev[id.long]==2) ## with an interaction effect (improves mild/moderate only)
ae <- rpois(length(id.long), exp(ae.int))
ae[id.long >= min.id.mig & year.long <= 2007] <- NA ## migrants' pre-period AEs are never observed (they arrive after 2007)
post.long <- year.long > 2007
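A quick tabulation of the simulated data (not part of either model, just a raw check) shows the compositional shift that drives the disagreement: the severe migrants appear only in the post period, so the crude mean goes up even though every individual's rate went down.
tapply(ae, post.long, mean, na.rm = TRUE) ## raw mean exacerbation count, pre vs post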
glm(ae ~ post.long, family=poisson) ## naive pooled GLM for reference, ignoring the repeated measures
geefit <- geeglm(ae ~ post.long, id=id.long, family=poisson, corstr = "exchangeable")
mefit <- glmer(ae ~ post.long + (1|id.long), family=poisson)
summary(geefit)
summary(mefit)
The essential parts of the output are:
From the GEE:
Coefficients:
Estimate Std.err Wald Pr(>|W|)
(Intercept) 0.54562 0.01561 1221.25 <2e-16 ***
post.longTRUE 0.00304 0.00670 0.21 0.65
From the mixed effects model:
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -1.2041 0.0441 -27.28 < 2e-16 ***
post.longTRUE -0.0411 0.0123 -3.35 0.00082 ***
This shows effects in the opposite directions we set out to demonstrate: the population-averaged (GEE) estimate is slightly positive, while the subject-specific (mixed model) estimate is negative. Getting "significant" results is just a matter of ramping up the sample size.
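To put both estimates on the rate-ratio scale, exponentiate the coefficients from the fitted objects above:
exp(coef(geefit)["post.longTRUE"]) ## population-averaged rate ratio, just above 1
exp(fixef(mefit)["post.longTRUE"]) ## subject-specific rate ratio, about exp(-0.041) = 0.96
The mixed-model rate ratio answers "how did a given person's rate change after the policy," while the GEE rate ratio answers "how did the population-level average rate change."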