There are two main reasons to standardise predictors: optimisation performance and interpretability of results. If glmer estimates the model parameters without problems (no warning messages; estimates and their standard errors look reasonable), then you don't need to standardise your predictors.
Depending on issues such as sample size, collinearity of predictors, inclusion of non-orthogonal polynomial terms, and the complexity of the hierarchical structure and the response distribution, glmer's internal optimisation may find it difficult to estimate the model parameters. In such cases, standardisation can help.
That said, I have yet to encounter a situation where glmer struggled. It seems to be a particularly well-crafted bit of model fitting.
That leaves interpretability: parameter estimates of standardised predictors can be compared directly; the larger the absolute value, the more "important" (better: influential) the predictor is for the response. When transforming, consider Andrew Gelman's advice to divide by 2 standard deviations (rather than one) when you also have categorical predictors (https://statmodeling.stat.columbia.edu/2006/06/21/standardizing_r/ ), or to re-code dummy categories as -1/1 rather than 0/1 (http://andrewgelman.com/2009/06/09/standardization/).
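The transformation itself is simple arithmetic. A minimal sketch in Python (the same computation R users would get from scale() with the divisor doubled; the function names and example data are purely illustrative):

```python
import numpy as np

def standardise_2sd(x):
    """Gelman's suggestion: centre a numeric predictor and divide by
    two sample standard deviations, so its coefficient is roughly
    comparable in magnitude to that of an untransformed 0/1 dummy."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / (2 * x.std(ddof=1))

def recode_pm1(dummy):
    """Re-code a 0/1 dummy to -1/+1, the alternative coding Gelman
    mentions for categorical predictors."""
    return 2 * np.asarray(dummy) - 1

# Illustrative data: a continuous predictor and a binary dummy.
age = np.array([23.0, 35.0, 41.0, 52.0, 60.0])
z = standardise_2sd(age)       # mean 0, standard deviation 0.5
group = recode_pm1([0, 1, 1, 0, 1])
```

After this rescaling, a one-unit change in z corresponds to a two-standard-deviation change in the original predictor, which puts its coefficient on a footing comparable to the binary predictor's.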
This question has been asked in various guises, e.g. "Why shouldn't I standardize my predictors when putting them into a regression model?"