Forgive my very limited understanding. I am trying to learn about maximum likelihood estimation, and how it differs from least-squares estimation. From reading a little, I understand that the two are equivalent when the errors are Gaussian-distributed (see this question and this question for example).
Could anyone provide a concrete example of a case where the errors are non-Gaussian, in which LSE fails but MLE correctly recovers the true parameter?
I am basically looking for a practical example demonstrating a problem that MLE can solve but LSE cannot; I can't visualise such an example for myself at the moment. Something simple, at the level of measuring the length of my desk.
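To make the kind of setup I have in mind concrete, here is a minimal sketch (my own guess at a suitable example, so the choice of a Cauchy error distribution and the desk length of 100 cm are assumptions on my part): repeated noisy measurements of a desk, where the least-squares estimate of the length is the sample mean, and the MLE maximises the assumed likelihood instead.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import cauchy

rng = np.random.default_rng(0)

# Hypothetical setup: true desk length 100 cm, measurement errors
# drawn from a heavy-tailed (Cauchy) distribution rather than a Gaussian.
true_length = 100.0
data = true_length + cauchy.rvs(size=1000, random_state=rng)

# Least-squares estimate of a location parameter = the sample mean,
# which is unreliable here because the Cauchy has no finite variance.
lse = data.mean()

# MLE under the (correctly specified) Cauchy likelihood: minimise the
# negative log-likelihood over the location parameter.
def nll(m):
    return -cauchy.logpdf(data - m).sum()

med = np.median(data)
mle = minimize_scalar(nll, bounds=(med - 10, med + 10), method="bounded").x

print(f"LSE (sample mean): {lse:.3f}")
print(f"MLE (Cauchy likelihood): {mle:.3f}")
```

Is this the sort of comparison where MLE would reliably recover the true length while LSE need not, or is there a more standard textbook example?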
I hope my question is clear enough, thank you!