
I have been reading in different sources (whuber's answer on $R^2$, another source) that one needs to be careful when interpreting $R^2$, both in linear and in non-linear models.

In linear models $R^2$ makes sense, because one has the decomposition $S_\text{tot}=S_\text{reg}+S_\text{error}$, but in non-linear regression this relationship no longer holds. Why is that? Is there an intuitive explanation, or is it purely mathematical?
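To make concrete which quantities I mean (the notation here is my own: $\hat y_i$ are the fitted values and $\bar y$ is the sample mean), the identity can be written as

$$\underbrace{\sum_i (y_i-\bar y)^2}_{S_\text{tot}} = \underbrace{\sum_i (\hat y_i-\bar y)^2}_{S_\text{reg}} + \underbrace{\sum_i (y_i-\hat y_i)^2}_{S_\text{error}} + 2\sum_i (y_i-\hat y_i)(\hat y_i-\bar y),$$

and as far as I understand, in least-squares linear regression with an intercept the cross term vanishes because the residuals sum to zero and are orthogonal to the fitted values, which is what gives $S_\text{tot}=S_\text{reg}+S_\text{error}$.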

Reading from this discussion, I quote:

"There is a good reason that an nls model fit in R does not provide r-squared - r-squared doesn't make sense for a general nls model.

One way of thinking of r-squared is as a comparison of the residual sum of squares for the fitted model to the residual sum of squares for a trivial model that consists of a constant only. You cannot guarantee that this is a comparison of nested models when dealing with an nls model. If the models aren't nested this comparison is not terribly meaningful."
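If I understand the quote correctly, the comparison being described is

$$R^2 = 1 - \frac{\sum_i (y_i-\hat y_i)^2}{\sum_i (y_i-\bar y)^2},$$

i.e. the residual sum of squares of the fitted model against that of the constant-only model, whose least-squares fitted value is $\bar y$.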

Can one not have nested non-linear models, such that this comparison is meaningful? Is there any case in non-linear regression where $R^2$ is meaningful?
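To illustrate what I am asking, here is a minimal sketch in R, with made-up data and an arbitrary exponential model (both purely illustrative), checking the decomposition for an lm fit and an nls fit:

```r
## Minimal sketch with made-up data: check whether
## S_tot = S_reg + S_error holds for a linear and a non-linear fit.
set.seed(1)
x <- seq(1, 10, length.out = 50)
y <- 2 * exp(0.3 * x) + rnorm(50, sd = 2)

## Gap between S_tot and S_reg + S_error for a fitted model.
decomposition_gap <- function(fit) {
  s_tot <- sum((y - mean(y))^2)
  s_reg <- sum((fitted(fit) - mean(y))^2)
  s_err <- sum(residuals(fit)^2)
  s_tot - (s_reg + s_err)
}

lin <- lm(y ~ x)                                              # linear, with intercept
nl  <- nls(y ~ a * exp(b * x), start = list(a = 2, b = 0.3))  # non-linear (illustrative model)

decomposition_gap(lin)  # essentially zero: the decomposition holds
decomposition_gap(nl)   # generally non-zero: the decomposition fails
```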

Erosennin
