I understand that to run a multilevel model like
$ y_{ij} = \gamma_{00} + \gamma_{10}x_{1ij} + u_{0j} + e_{ij} $
in lme4 I can use
lmer(y ~ 1 + x + (1 | group), data = data)
or get the same result with
lmer(y ~ x + (1 | group), data = data)
That is, if I don't specify otherwise, lme4 includes the overall intercept by default. I noticed that I can obtain
$ y_{ij} = \gamma_{10}x_{1ij} + u_{0j} + e_{ij} $
by running
lmer(y ~ 0 + x + (1 | group), data = data)
At first I thought that if $\gamma_{00}$ was going to be close to 0 anyway, it wouldn't make much difference whether I included it or not, but by analogy with this answer it seems it could bias the other parameter estimates even if the intercept is statistically non-significant.
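To check my intuition, here is a quick simulation sketch (the data and variable names are made up for illustration): when the true $\gamma_{00}$ is nonzero and the predictor is not centred, dropping the overall intercept forces the fitted line through the origin and shifts the slope estimate.

```r
# Simulation sketch: compare the slope estimate with and without
# the overall intercept when the true intercept is nonzero.
library(lme4)

set.seed(1)
n_groups <- 30
n_per    <- 20
group <- rep(seq_len(n_groups), each = n_per)
x <- rnorm(n_groups * n_per, mean = 2)   # non-centred predictor
u <- rnorm(n_groups, sd = 0.5)           # random intercepts u_{0j}
y <- 0.3 + 1 * x + u[group] + rnorm(n_groups * n_per)  # true gamma_00 = 0.3

d  <- data.frame(y, x, group)
m1 <- lmer(y ~ 1 + x + (1 | group), data = d)  # with overall intercept
m0 <- lmer(y ~ 0 + x + (1 | group), data = d)  # intercept removed

fixef(m1)  # slope estimate close to the true value of 1
fixef(m0)  # slope absorbs the omitted intercept and drifts from 1
```

In my runs the intercept-free model's slope moves away from the true value, which is what made me suspect that "close to 0" is not a good enough reason to drop $\gamma_{00}$.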
I'm aware from this question that it can sometimes be appropriate to remove the intercept in ordinary regression, and so I assume it can also be a good idea to remove the intercept in multilevel modelling if we have good reason to believe the data-generating process is forced through the origin.
But are there any reasons for removing the overall intercept that are specific to multilevel modelling?