
I'm struggling to keep all the different meanings and connections apart.

The background of my question: it relates on the one hand to lmer models and on the other hand to goodness of fit, and to their relationship (if there is one). For example, I've read that one can use a chi-squared test to check goodness of fit, though apparently it's not the best method?

To begin: typically (at least for lme4 in R) the residuals are so-called Pearson residuals. I've read that this is better than a chi-square test, which in turn is better than a "reduced" chi-square test. Then there are likelihood and sum-of-squares methods to estimate the coefficients of lmer models (correct me if I'm wrong).
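To make the connection between these terms concrete, here is a small sketch (in Python with made-up counts, since I can write it without lme4; the variable names are mine). As far as I understand, the Pearson chi-square statistic is simply the sum of squared Pearson residuals, and the "reduced" chi-square divides that sum by the degrees of freedom:

```python
import numpy as np

# Made-up observed counts and model-expected counts, for illustration only
observed = np.array([18.0, 22.0, 31.0, 29.0])
expected = np.array([20.0, 20.0, 30.0, 30.0])

# Pearson residuals: (observed - expected) / sqrt(expected)
pearson_resid = (observed - expected) / np.sqrt(expected)

# The Pearson chi-square statistic is the sum of squared Pearson residuals
chi_sq = np.sum(pearson_resid ** 2)

# The "reduced" chi-square divides by the degrees of freedom
# (here assumed: number of cells minus one)
dof = len(observed) - 1
reduced_chi_sq = chi_sq / dof
```

So the three terms are not competing tests but the same ingredients at different stages: residuals per observation, their pooled sum, and that sum normalized by the degrees of freedom.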

What puzzles me: what is the advantage of Pearson residuals over the chi-square statistic, and what is the advantage of chi-square over reduced chi-square? Next (this is probably a dumb question): when will a model use likelihood and when sum of squares to estimate its coefficients? Why isn't this done via Pearson as well?
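One thing I did find, which may be part of the answer (a sketch with made-up data; the names and the grid-search are mine): for a Gaussian error model, maximizing the likelihood over a coefficient is equivalent to minimizing the sum of squares, because the log-likelihood is a negative constant times the sum of squares plus terms that don't involve the coefficient:

```python
import numpy as np

# Tiny made-up dataset, for illustration only
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.9, 5.1, 7.0])

# Least-squares slope through the origin: minimizes sum((y - b*x)^2)
b_ls = np.sum(x * y) / np.sum(x * x)

# Under a Gaussian error model, the log-likelihood in b is (up to constants)
#   -sum((y - b*x)^2) / (2 * sigma^2),
# so maximizing it over b is the same as minimizing the sum of squares.
# Crude grid search over candidate slopes to show the two agree:
bs = np.linspace(0.0, 5.0, 100001)
loglik = np.array([-np.sum((y - b * x) ** 2) for b in bs])
b_ml = bs[np.argmax(loglik)]
```

If that equivalence is right, then "likelihood vs. sum of squares" is less a choice between rival methods than a question of which error model the estimator assumes.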

Edit: I just saw the question "Chi-square test: difference between goodness-of-fit test and test of independence", which might also be connected to mine.

Ben
  • Don't worry about being downvoted for lack of knowledge. As long as you prepare the content of your question thoroughly and format it nicely, you should get a nonnegative score. However, if you have several different questions, you should ask them separately. – Ferdi Oct 27 '17 at 17:07
  • Until now I thought only advanced questions were allowed, because the basics can be looked up everywhere. That's almost true, but in my case I don't think I benefit from reading more: more and more questions arise, and I can't keep them apart. It's probably the nature of statistics: different procedures use the same methods in some places and not in others. It's not a science for nothing. :) – Ben Oct 27 '17 at 17:14
