
Forgive my very limited understanding. I am trying to learn about maximum likelihood estimation, and how it differs from least-squares estimation. From reading a little, I understand that the two are equivalent when the errors are Gaussian-distributed (see this question and this question for example).

Could anyone provide a concrete example of a case with non-Gaussian errors where LSE fails but MLE correctly recovers the true parameter?

I am basically looking for a practical example that demonstrates MLE solving a problem that LSE cannot, but I can't visualise such an example for myself at the moment. Something simple, at the measuring-the-length-of-my-desk level.

I hope my question is clear enough, thank you!

teeeeee
  • Hi: aside from OLS, least squares won't result in a closed-form solution when the model is non-linear, so how you apply it to non-linear problems is not clear to me. Unless by least squares you mean "minimizing the squares of the error terms"? – mlofton Jun 30 '20 at 16:26
  • I just mean linear least squares fitting, that's all. – teeeeee Jun 30 '20 at 16:29
  • The standard example concerns estimating the location of a lighthouse (with a rotating light) based on random observations of where the light hits the coastline. No matter how many observations you make, the OLS estimate never improves over the estimate based on a single observation, whereas MLE converges to the true value. See https://stats.stackexchange.com/questions/36027 and other threads on the Cauchy distribution. – whuber Jun 30 '20 at 17:26
  • I think this answer https://stats.stackexchange.com/a/317696/290032 is the closest thing I've seen that helps to highlight the difference. As you can see, the answer isn't very fleshed out. I would like to see a simple distribution of dummy data if possible (see the sketch after these comments). Thank you! – teeeeee Jun 30 '20 at 22:57
  • I think that example is interesting, but I also think it's not conveying the general idea. Any time the true underlying model (the model describing the response $y$) is NOT LINEAR, least squares is not appropriate. The more difficult part is specifying what the appropriate model is. Take a model where the response $y$ is a simple logistic function of $x$. That model, fitted by least squares, won't do as well as the model fitted using the logistic function. So, that's an example where least squares will do worse. – mlofton Jun 30 '20 at 23:13
  • I see. I guess maybe I am missing the general idea then. For example, I often make measurements in the lab, and during the analysis we would fit the (noisy) data using least-squares fits. Now, the function that we choose for these fits will vary depending on the process - sometimes an exponential function, sometimes linear, sometimes a Lorentzian, etc. So in doing the least squares here I guess we are assuming the errors on those measurements are Gaussian. I would like an example to show when MLE would be more appropriate instead. – teeeeee Jun 30 '20 at 23:27
  • That's precisely what the lighthouse problem illustrates. – whuber Jul 01 '20 at 14:24
  • @whuber: apologies. I didn't read that example carefully and still haven't, but I will. I was referring to the example at the link. teeeeee: not only are you assuming that the errors are Gaussian, you are also assuming that the FUNCTIONAL RELATIONSHIP is linear. Those are the only, but crucial, assumptions of OLS. – mlofton Jul 01 '20 at 16:39
  • teeeeee: One more thing. I wouldn't say "when the MLE would be more appropriate". It's better to say "when a different model would be more appropriate". How the model estimates are obtained is a different issue and not really what you're asking about. That's why I was confused initially with your question when I mentioned the "closed form" solution, etc. – mlofton Jul 01 '20 at 16:41
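
To make the lighthouse/Cauchy example from the comments concrete, here is a minimal simulation sketch (assuming Python with NumPy/SciPy; the true location, unit scale, and sample size below are arbitrary choices for illustration). Minimizing the sum of squared residuals about a single location parameter gives the sample mean, which does not converge for Cauchy data because the Cauchy distribution has no mean; the MLE under the Cauchy model, found here by a simple grid search, does home in on the true location.

```python
import numpy as np
from scipy.stats import cauchy

rng = np.random.default_rng(0)
true_location = 3.0   # hypothetical "true" lighthouse position along the shore
n = 1000

# A lighthouse flashing at angles uniform in (-pi/2, pi/2) produces landing
# points on the shore that follow a Cauchy distribution centred at its position.
data = true_location + np.tan(rng.uniform(-np.pi / 2, np.pi / 2, size=n))

# Least-squares estimate of a single location parameter: minimizing the sum of
# squared residuals about a constant is just the sample mean.
lse_estimate = data.mean()

# MLE under the Cauchy model (unit scale assumed known): maximize the
# log-likelihood over the location with a simple grid search near the median.
grid = np.linspace(np.median(data) - 5, np.median(data) + 5, 2001)
log_lik = np.array([cauchy.logpdf(data, loc=mu, scale=1.0).sum() for mu in grid])
mle_estimate = grid[np.argmax(log_lik)]

print("LSE (sample mean):", lse_estimate)
print("MLE (Cauchy)     :", mle_estimate)
```

Rerunning with different seeds typically shows the MLE landing close to the true location, while the sample mean can be thrown far off by a handful of extreme observations, which is exactly the kind of failure of least squares the question asks about.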
