I have been reading different questions about how easy it is to run into singular fits when fitting mixed-effects models with glmer(). The general idea is that singularities tend to arise from overly complex random-effects structures; with a simple structure, they can also happen when there is not enough data to estimate the variance-covariance matrix. See, for example, this page by Ben Bolker, Robert Long's answer to this post, or the help page of isSingular().
However, the model I'm trying to fit is very simple:
mod.detection_rand <- glmer(reaction ~ Pedra + (1|Channel), family="binomial", data = garotes)
boundary (singular) fit: see ?isSingular
... and apparently I have enough data for every combination of the (fixed and random) predictor variables:
library(tidyverse)
garotes %>%
group_by(Channel, Pedra) %>%
summarise(n = n())
# A tibble: 16 x 3
# Groups: Channel [8]
Channel Pedra n
<int> <fct> <int>
1 1 No 13
2 1 Yes 13
3 2 No 14
4 2 Yes 12
5 3 No 12
6 3 Yes 14
7 4 No 13
8 4 Yes 13
9 5 No 13
10 5 Yes 13
11 6 No 14
12 6 Yes 12
13 7 No 13
14 7 Yes 13
15 8 No 14
16 8 Yes 12
Given that, why does the fit still end up singular? What do you think?
EDIT: Here's the summary of the model, summary(mod.detection_rand)
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: reaction ~ Pedra + (1 | Channel)
Data: garotes
AIC BIC logLik deviance df.resid
261.5 271.5 -127.7 255.5 205
Scaled residuals:
Min 1Q Median 3Q Max
-1.8533 -0.9449 0.5396 0.5396 1.0583
Random effects:
Groups Name Variance Std.Dev.
Channel (Intercept) 0 0
Number of obs: 208, groups: Channel, 8
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.1133 0.1946 -0.582 0.56
PedraYes 1.3473 0.3066 4.394 1.11e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
(Intr)
PedraYes -0.635
convergence code: 0
boundary (singular) fit: see ?isSingular
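Since the estimated Channel variance is exactly 0, the singular glmer() fit should reduce to an ordinary logistic regression with the same fixed effects. Here is a quick base-R check of that, using success/failure counts I tallied by hand from the data posted below (so treat the numbers as illustrative):

```r
# Successes (reaction == 1) and failures (reaction == 0) per Pedra level,
# tallied from the posted data set
counts <- data.frame(
  Pedra = c("No", "Yes"),
  succ  = c(50, 79),
  fail  = c(56, 23)
)

# Plain logistic regression with no random effect
fit <- glm(cbind(succ, fail) ~ Pedra, family = binomial, data = counts)
coef(fit)
# (Intercept) ~ -0.113, PedraYes ~ 1.347: the same fixed effects as the
# glmer() summary above, which is exactly what a zero random-intercept
# variance implies
```

If the random-intercept variance were genuinely positive, the two sets of estimates would differ; here the mixed model has nothing to add over glm().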
EDIT2: Following Billy's comment, I refitted the model with every optimizer available through lme4::allFit(). All of them converge, and all of them report the same singular fit:
bobyqa : boundary (singular) fit: see ?isSingular
[OK]
Nelder_Mead : boundary (singular) fit: see ?isSingular
[OK]
nlminbwrap : boundary (singular) fit: see ?isSingular
[OK]
nmkbw : boundary (singular) fit: see ?isSingular
[OK]
optimx.L-BFGS-B : boundary (singular) fit: see ?isSingular
[OK]
nloptwrap.NLOPT_LN_NELDERMEAD : boundary (singular) fit: see ?isSingular
[OK]
nloptwrap.NLOPT_LN_BOBYQA : boundary (singular) fit: see ?isSingular
[OK]
EDIT3: Following Isabella's answer, I checked the structure of the outcome variable (reaction). Here's the table of outcomes:
library(tidyverse)
garotes %>%
group_by(Channel, Pedra, reaction) %>%
summarise(n = n()) %>%
print(n = Inf)
# A tibble: 32 x 4
# Groups: Channel, Pedra [16]
Channel Pedra reaction n
<int> <fct> <int> <int>
1 1 No 0 6
2 1 No 1 7
3 1 Yes 0 3
4 1 Yes 1 10
5 2 No 0 7
6 2 No 1 7
7 2 Yes 0 2
8 2 Yes 1 10
9 3 No 0 8
10 3 No 1 4
11 3 Yes 0 6
12 3 Yes 1 8
13 4 No 0 7
14 4 No 1 6
15 4 Yes 0 3
16 4 Yes 1 10
17 5 No 0 8
18 5 No 1 5
19 5 Yes 0 1
20 5 Yes 1 12
21 6 No 0 6
22 6 No 1 8
23 6 Yes 0 2
24 6 Yes 1 10
25 7 No 0 6
26 7 No 1 7
27 7 Yes 0 2
28 7 Yes 1 11
29 8 No 0 8
30 8 No 1 6
31 8 Yes 0 4
32 8 Yes 1 8
Apparently, both types of outcome are present for every Channel and every Pedra treatment, so it is not like the example Isabella presented. Furthermore, I tried fitting this GLMM with the GLMMadaptive package, and it did not converge either.
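A zero variance estimate is also plausible descriptively. Pooling the table above over Pedra (the split is nearly balanced within every channel, so this is only a rough check), the per-channel success counts barely vary more than binomial sampling alone would produce, which is why maximum likelihood pushes the Channel variance to its boundary at 0. A base-R sketch, with counts tallied by hand from the table above:

```r
# reaction == 1 counts per Channel, both Pedra levels pooled (26 trials each)
succ <- c(17, 17, 12, 16, 17, 18, 18, 14)
n    <- rep(26, 8)

# Chi-squared test of equal success probability across the 8 channels
prop.test(succ, n)
# p-value around 0.65: the between-channel spread is entirely consistent
# with binomial sampling noise, so there is no detectable channel-level
# variance for the random intercept to capture
```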
EDIT4: The data set I'm using, in case anyone is curious:
Channel Pedra reaction
1 No 1
2 No 0
3 No 0
4 No 0
5 No 0
6 No 1
7 No 0
8 No 0
1 No 1
2 No 1
3 No 1
4 No 1
5 No 0
6 No 0
7 No 0
8 No 0
1 No 0
2 No 1
3 No 0
4 No 0
5 No 0
6 No 0
7 No 0
8 No 1
1 No 0
2 No 1
3 Yes 0
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 0
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 0
5 No 0
6 No 1
7 Yes 1
8 Yes 1
1 Yes 0
2 Yes 1
3 Yes 1
4 Yes 1
5 Yes 1
6 Yes 0
7 No 1
8 No 1
1 Yes 1
2 Yes 1
3 Yes 1
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 1
2 Yes 1
3 Yes 1
4 Yes 1
5 Yes 0
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 0
8 Yes 0
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 0
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 0
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 0
8 Yes 0
1 Yes 1
2 Yes 0
3 Yes 1
4 Yes 0
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 1
2 Yes 1
3 Yes 0
4 Yes 1
5 Yes 1
6 Yes 0
7 Yes 1
8 Yes 1
1 Yes 1
2 Yes 1
3 Yes 1
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 0
2 Yes 0
3 Yes 1
4 Yes 1
5 Yes 1
6 Yes 1
7 Yes 1
8 Yes 1
1 Yes 1
2 No 0
3 Yes 1
4 No 1
5 Yes 1
6 No 1
7 Yes 1
8 No 1
1 No 0
2 Yes 1
3 No 0
4 Yes 1
5 No 1
6 Yes 1
7 No 1
8 Yes 1
1 Yes 0
2 No 1
3 Yes 1
4 No 0
5 Yes 1
6 No 1
7 Yes 1
8 No 0
1 No 0
2 No 1
3 No 1
4 No 0
5 No 1
6 No 0
7 No 0
8 No 0
1 No 1
5 No 0
3 No 1
4 No 1
2 No 1
6 No 0
7 No 1
8 No 0
1 No 0
5 No 0
3 No 0
4 No 0
2 No 1
6 No 0
7 No 0
8 No 0
1 No 1
5 No 1
3 No 1
4 No 0
2 No 0
6 No 1
7 No 1
8 No 0
1 No 1
5 No 0
3 No 0
4 No 1
2 No 0
6 No 1
7 No 1
8 No 1
1 No 1
5 No 1
3 No 0
4 No 1
2 No 0
6 No 1
7 No 1
8 No 1
1 No 1
5 No 1
3 No 0
4 No 0
2 No 0
6 No 1
7 No 0
8 No 0
1 No 0
5 No 0
3 No 0
4 No 1
2 No 0
6 No 0
7 No 1
8 No 1
Thank you very much for all of your responses, in any case; I'm learning a lot from them!