I'm struggling to separate all the different meanings and connections.
The background of my question: it relates on the one hand to lmer models and on the other hand to goodness of fit, and to the relationship between the two (if there is one). For example, I've read that one can use a chi-squared test to assess goodness of fit, though it's apparently not the best method?
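To make concrete what I mean by a chi-squared goodness-of-fit test, here is a toy sketch in R (the counts are invented purely for illustration):

```r
# Invented counts from 60 rolls of a die we suspect is fair
observed <- c(8, 12, 9, 11, 10, 10)

# Chi-squared goodness-of-fit test against the uniform null distribution
chisq.test(observed, p = rep(1/6, 6))
```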
To begin: typically (at least for lme4 in R) the residuals can be computed as so-called Pearson residuals. I read that this is better than a chi-squared statistic, which in turn is better than a "reduced" chi-squared statistic. Then there are likelihood and sum of squares as ways to estimate the coefficients of lmer models (correct me if I'm wrong).
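For concreteness, here is roughly what I mean, sketched with the sleepstudy data that ships with lme4 (just an illustration, not my actual data):

```r
library(lme4)

# Fit a linear mixed model; lmer uses REML by default
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# Pearson residuals can be requested explicitly
r_pearson <- residuals(fit, type = "pearson")

# Refit by plain maximum likelihood instead of REML
fit_ml <- update(fit, REML = FALSE)
logLik(fit_ml)  # the log-likelihood that is maximised
```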
What puzzles me: what is the advantage of Pearson residuals over a chi-squared statistic, and what is the advantage of chi-squared over reduced chi-squared? Next, and this is probably a dumb question: when does a model use likelihood and when sum of squares to estimate coefficients? Why isn't this done with a Pearson statistic, too?
Edit: I just saw the question Chi-square test: difference between goodness-of-fit test and test of independence, which might also be connected to mine.