
Consider the problem of choosing an estimator of $\sigma^2$ based on a random sample of size $n$ from a $N(\mu,\sigma^2)$ distribution.

As undergraduates, we were always taught to use the sample variance

$$\hat{s}^2 = \dfrac{1}{n-1}\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}$$

instead of the maximum likelihood estimator

$$\hat{\sigma}^2 = \dfrac{1}{n}\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}.$$

This is because we learned that $\hat{s}^2$ is an unbiased estimator and that $\hat{\sigma}^2$ is biased.

However, now that I'm studying for a PhD, I've read that we should choose estimators by minimizing mean squared error (MSE = bias$^2$ + variance).

It can be shown that $$\operatorname{mse}(\hat{\sigma}^2) < \operatorname{mse}(\hat{s}^2).$$
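Sketching the standard calculation under normality (using the fact that $(n-1)\hat{s}^2/\sigma^2 \sim \chi^2_{n-1}$):

$$\operatorname{mse}(\hat{s}^2) = \operatorname{Var}(\hat{s}^2) = \frac{2\sigma^4}{n-1},$$

while $\hat{\sigma}^2 = \frac{n-1}{n}\hat{s}^2$ has bias $-\sigma^2/n$ and variance $2(n-1)\sigma^4/n^2$, so

$$\operatorname{mse}(\hat{\sigma}^2) = \frac{2n-1}{n^2}\,\sigma^4 < \frac{2\sigma^4}{n-1} \quad\text{for all } n \ge 2.$$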

So, why do most people use $\hat{s}^2$?
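A minimal Monte Carlo sketch illustrates the comparison (Python; the values of $\mu$, $\sigma^2$, $n$, and the number of replications are arbitrary choices for illustration, and the divisor $n+1$ mentioned in the comments below is included as well):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, reps = 0.0, 4.0, 10, 200_000  # illustrative values only

# reps independent samples of size n from N(mu, sigma2)
x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

# Sum of squared deviations about the sample mean, one per sample
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

# Compare divisors: n-1 (unbiased), n (MLE), n+1 (minimum MSE under normality)
for divisor in (n - 1, n, n + 1):
    est = ss / divisor
    print(f"divisor {divisor:>2}: empirical MSE = {np.mean((est - sigma2) ** 2):.3f}")
```

With these settings the theoretical values are $2\sigma^4/(n-1) \approx 3.56$ for the unbiased estimator, $(2n-1)\sigma^4/n^2 = 3.04$ for the MLE, and $2\sigma^4/(n+1) \approx 2.91$ for the $n+1$ divisor, which the empirical figures should reproduce.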

  • You *might* choose MMSE; it's a fine criterion, but that doesn't mean you have to use it. – Glen_b Oct 15 '13 at 13:33
  • For the normal it gives a divisor of $n+1$, but one problem is you don't actually know what distribution you really have. Yet the $n-1$ form is unbiased for every distribution. I often just use ML, but I'm generally as happy with $n-1$, and not averse to $n+1$, even though I rarely use it. It's only a hard choice when $n$ is small. – Glen_b Oct 15 '13 at 13:39
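A sketch of why the divisor $n+1$ in the comment above minimizes the MSE under normality (writing $S = \sum_{i=1}^{n}(X_i-\bar{X})^2$, so that $S \sim \sigma^2\chi^2_{n-1}$): any estimator of the form $cS$ has

$$\operatorname{mse}(cS) = \sigma^4\left[2(n-1)c^2 + \big((n-1)c-1\big)^2\right],$$

which is minimized at $c = 1/(n+1)$, with minimum value $2\sigma^4/(n+1)$.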
