I'm not sure where the allFit
function comes from, but your code produces errors on my system.
Anyway, I don't really agree with the premise that a singular fit corresponds to an estimate of zero random-effects variance in a model with random intercepts only.
That could be one reason, but in my experience singular fits more often occur when the random-effects structure is overfitted relative to the data, very often because there are too many random slopes. Another common cause is that there is very little variation in the slopes within some grouping factor, so that random slopes for the variable in question are not needed. It is easy to construct a simple example:
library(lme4)     # lFormula(), lmer(), sdcor2cov()
library(MASS)     # mvrnorm()
library(magrittr) # %>%

set.seed(15)
n.school <- 20
dt <- expand.grid(sch.id = 1:n.school, ses = 1:10)
dt$Y <- 1  # dummy response so that lFormula() can build the model frame

X <- model.matrix( ~ ses, data = dt)
myFormula <- "Y ~ ses + (ses | sch.id)"
foo <- lFormula(eval(myFormula), dt)
Z <- t(as.matrix(foo$reTrms$Zt))

betas <- c(1, 2)
s1 <- 3    # SD of random intercepts
s2 <- 0.1  # SD of random slopes
rho <- 0.4 # correlation between intercepts and slopes
cormat <- matrix(c(s1, rho, rho, s2), 2, 2)  # SD / correlation matrix
covmat <- lme4::sdcor2cov(cormat)            # covariance matrix (needed for mvrnorm)
umat <- MASS::mvrnorm(n.school, c(0, 0), covmat, empirical = TRUE)  # simulate the random effects
u <- c(rbind(umat[, 1], umat[, 2]))  # lme4 needs the random effects in this order (interleaved)
e <- rnorm(nrow(dt), 0, 2)           # residual error

dt$Y <- X %*% betas + Z %*% u + e
lmer(myFormula, dt) %>% summary()
which results in a singular fit (the correlation is estimated at 1), where the real problem is that there is too little variance among the random slopes.
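In a situation like this, the usual remedy is to simplify the random-effects structure rather than to keep the maximal model. A minimal sketch, continuing from the simulated `dt` above (`isSingular()` is lme4's boundary check, and the model names here are just my own labels):

```r
library(lme4)

# Refit the full model and confirm the singularity programmatically
m_full <- lmer(Y ~ ses + (ses | sch.id), data = dt)
isSingular(m_full)  # TRUE when the fit is on the boundary

# First try dropping the intercept-slope correlation ...
m_nocorr <- lmer(Y ~ ses + (ses || sch.id), data = dt)

# ... and if that is still singular, drop the random slopes entirely
m_int <- lmer(Y ~ ses + (1 | sch.id), data = dt)

# Likelihood-ratio comparison of the nested models
anova(m_full, m_nocorr, m_int)
```

Here the random-slope variance was simulated to be tiny, so I would expect the intercept-only model to be adequate, but the `anova()` comparison lets the data decide.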
In my opinion, the reason we see this so often is the infamous paper by Barr et al. (2013), "Keep it maximal", where the advice is basically to fit random slopes, with correlations, for all fixed effects. Even one of the models in their own paper has a singular fit, which the authors were not aware of (see Bates et al. (2015), "Parsimonious mixed models", for more details).
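One way to see whether a random-effects structure is overfitted, in the spirit of the Bates et al. paper, is a principal components analysis of the estimated random-effects covariance matrix, which lme4 exposes as `rePCA()`. A sketch, assuming the model above has been assigned to an object:

```r
library(lme4)

# Assumes dt and myFormula from the simulation above
fit <- lmer(myFormula, dt)

# PCA of the random-effects covariance matrix (Bates et al. 2015):
# components with (near-)zero variance suggest the random-effects
# structure is richer than the data can support
summary(rePCA(fit))
```

If one of the components accounts for essentially none of the variance, that is a signal to remove a random slope or a correlation and refit.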