
Suppose we have a random variable $X$ that is normally distributed, $X \sim N(\theta, 1)$, and we estimate $\theta$ using the squared error loss function $L(\theta, d) = (\theta - d)^2$. To find the Bayes estimator, we find the $d$ that minimizes the posterior expected loss, and we obtain that the Bayes estimator is the posterior mean. Isn't this the same procedure we would use to find the maximum likelihood estimator of $\theta$? Wouldn't we also aim to minimize the expected loss in that case? I struggle to see the difference between the two concepts, at least in this context.
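For concreteness, here is a minimal worked version of the contrast, assuming a single observation $x$ and a conjugate normal prior $\theta \sim N(\mu_0, \tau_0^2)$ (the prior and its parameters are my own illustration, not part of the question):

$$
\hat\theta_{\text{Bayes}} = \arg\min_d \, E\!\left[(\theta - d)^2 \mid X = x\right] = E[\theta \mid X = x] = \frac{\tau_0^2\, x + \mu_0}{\tau_0^2 + 1},
\qquad
\hat\theta_{\text{MLE}} = \arg\max_\theta \, e^{-(x - \theta)^2/2} = x.
$$

The Bayes estimator shrinks $x$ toward the prior mean $\mu_0$ and minimizes an expectation over $\theta$ given the data, while the MLE maximizes over $\theta$ and involves no prior or expectation at all; as $\tau_0^2 \to \infty$ (an increasingly flat prior), the two coincide.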

Bill
  • The maximum likelihood estimate selects the single point of highest likelihood and ignores all other parameter values, no matter how close their likelihood is to the maximum. The case where the two approaches give similar results is when the Bayesian method uses a uniform prior and you have enough data for the likelihood function to be well behaved around the maximum. Interestingly, Bayesian methods are most popular in cases where the amount of data is insufficient to give a well-behaved global likelihood. – abstrusiosity Nov 08 '20 at 15:48
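A quick numerical sketch of that comment (a hypothetical simulation, not from the original thread): with a normal prior, the posterior mean is pulled toward the prior when $n$ is small but approaches the MLE (the sample mean) as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0          # true mean of the N(theta, 1) data
mu0, tau0_sq = 0.0, 1.0   # illustrative N(mu0, tau0^2) prior (my assumption)

for n in [1, 10, 100, 10_000]:
    x = rng.normal(theta_true, 1.0, size=n)
    mle = x.mean()  # MLE of theta for N(theta, 1) is the sample mean
    # Conjugate normal update: precisions add; the posterior mean is a
    # precision-weighted average of the prior mean and the sample mean.
    post_prec = 1.0 / tau0_sq + n
    post_mean = (mu0 / tau0_sq + n * mle) / post_prec
    print(f"n={n:>6}  MLE={mle:.4f}  Bayes (posterior mean)={post_mean:.4f}")
```

With $n = 1$ and $\tau_0^2 = 1$, the Bayes estimate sits exactly halfway between the prior mean and the observation; by $n = 10{,}000$ the two estimates agree to several decimal places.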

0 Answers