Basically, $t$ is just $\beta/\mathrm{SE}(\beta)$, where $\beta$ is a regression parameter. There is nothing misleading in this value if you consider it as that ratio, or as a "standardized" parameter. If you look at Bates' original arguments against $p$-values in lme4, he writes mostly about the degrees of freedom being problematic, rather than the $t$ or $F$ values themselves (see also the r-sig-mixed-models FAQ). Notice that different statistical software can use different naming conventions, e.g. SPSS calls the parameters $B$'s and the standardized parameters $\beta$'s, while lme4 follows the lm convention of calling them `Estimate` and `t value`.
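To make the ratio interpretation concrete, here is a minimal sketch using lme4's built-in `sleepstudy` data; the `t value` column reported by `summary()` is nothing more than `Estimate` divided by `Std. Error`:

```r
library(lme4)

## fit a standard example model (random intercept and slope per subject)
fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

## fixed-effects table: columns Estimate, Std. Error, t value
fe <- coef(summary(fm))
fe

## the reported t values are simply the ratio Estimate / Std. Error
all.equal(fe[, "Estimate"] / fe[, "Std. Error"], fe[, "t value"])
```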
Pinheiro and Bates describe the use of $p$-values in "Mixed-Effects Models in S and S-PLUS", so it is hard to look for arguments against them in that book. The ratios are also discussed by Bates in "lme4: Mixed-effects modeling with R", in comparison to the $t$ and $F$ values for fixed-effects models, for example (p. 70):
> In a fixed-effects model the profile traces in the original scale will always be straight lines. For mixed models these traces can fail to be linear, as we see here, contradicting the widely-held belief that inferences for the fixed-effects parameters in linear mixed models, based on $T$ or $F$ distributions with suitably adjusted degrees of freedom, will be completely accurate. The actual patterns of deviance contours are more complex than that.
This makes the ratios somewhat similar to the usual $t$ and $F$ statistics, yet not as adequate as we would expect them to be for proper hypothesis testing.
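If you want to inspect those profile traces yourself, lme4 provides a `profile()` method; a short sketch (reusing the `sleepstudy` fit from above) could look as follows. Roughly straight profile zeta plots for the fixed effects suggest the $t$-based approximation is not unreasonable, while clear curvature is a warning sign:

```r
library(lme4)
library(lattice)

fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

## profile the fixed-effects parameters only
pf <- profile(fm, which = "beta_")

## profile zeta plots: straight lines indicate the quadratic (t-like)
## approximation is adequate for these parameters
xyplot(pf)

## profile-based confidence intervals, for comparison with Wald intervals
confint(pf)
```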
Notice also that other authors do not always consider the degrees-of-freedom issue to be problematic; e.g., Gałecki and Burzykowski in "Linear Mixed-Effects Models Using R" simply assume $n-p$ degrees of freedom and treat the distribution of the test statistic as approximately $t$, e.g. (p. 84):
> The null distribution of the $t$-test statistic is the $t$-distribution with $n - p$ degrees of freedom.
and (p. 140):
> Confidence intervals for individual components of the parameter vector $\beta$ can be constructed based on a $t$-distribution used as an approximate distribution for the test statistic
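As a hedged sketch of that kind of approximation (an assumption of this approach, not something lme4 itself endorses), one can treat the reported $t$ values as approximately $t$-distributed with $n-p$ degrees of freedom, where $n$ is the number of observations and $p$ the number of fixed effects, and derive approximate $p$-values and confidence intervals from that assumption:

```r
library(lme4)

fm <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
fe <- coef(summary(fm))

## assumed degrees of freedom: n observations minus p fixed effects
df <- nrow(sleepstudy) - nrow(fe)

## approximate two-sided p-values under the assumed t(n - p) null
p_approx <- 2 * pt(abs(fe[, "t value"]), df = df, lower.tail = FALSE)

## approximate 95% confidence intervals from the same assumption
ci_approx <- fe[, "Estimate"] +
  outer(fe[, "Std. Error"], qt(c(0.025, 0.975), df = df))
colnames(ci_approx) <- c("2.5 %", "97.5 %")

cbind(fe, "Pr(>|t|)" = p_approx, ci_approx)
```

Whether $n-p$ is a defensible choice of degrees of freedom is, of course, exactly the point Bates disputes.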
So it seems that the main rationale is that while $p$-values can be misleading because of the unclear null distribution, $t$ values can still be useful, at least as standardized parameters. You can also use them for hypothesis testing, but then you need to make some assumption about their distribution and verify that assumption, for example by looking at the profile plots.
What Bates seems to be saying is that you use them at your own risk.