
When reading books on statistical estimation, I frequently encounter the term asymptotically unbiased. I don't understand the intuitive meaning of the term or the math behind it.

Also, how is an asymptotically unbiased estimator different from an unbiased estimator?

GeorgeOfTheRF

1 Answer


The basic difference between being unbiased and asymptotically unbiased is that the former concerns a fixed, finite sample size $n$ while the latter concerns the limit as $n \rightarrow \infty$.
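
In symbols (using a generic $\hat \theta_n$ for an estimator of a parameter $\theta$ computed from $n$ observations, a notation not used elsewhere in this answer):

$$\text{unbiased:}\quad E(\hat \theta_n) = \theta \ \text{ for every } n, \qquad \text{asymptotically unbiased:}\quad \lim_{n \to \infty} E(\hat \theta_n) = \theta.$$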

An estimator is unbiased if, under repeated sampling, it equals the population parameter on average. Suppose I am interested in student math ability as reflected on a math test, and the true population mean is $\mu$. I take a sample of $n$ student test scores and compute the mean $\bar x_1$; suppose it happens that $\bar x_1 \lt \mu$. I take another sample of size $n$ and find its mean $\bar x_2$. Because of sampling error, it is likely that $\bar x_1 \ne \bar x_2$; suppose this time $\bar x_2 \gt \mu$. Some samples, like the first, underestimate $\mu$, while others, like the second, overestimate it. Over repeated samples, however, the mean of the sampling distribution equals the parameter: $E(\bar x) = \mu$.
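
A quick way to see this is by simulation. Here is a minimal sketch in Python/NumPy, assuming normally distributed test scores with hypothetical values $\mu = 70$ and $\sigma = 10$ (the numbers are illustrative only):

```python
import numpy as np

# Minimal simulation (assumptions: normally distributed scores,
# hypothetical values mu = 70, sigma = 10, sample size n = 25).
rng = np.random.default_rng(0)
mu, sigma, n, n_samples = 70, 10, 25, 100_000

# Draw many samples of size n; each row is one sample, and each
# row mean is one realization of x_bar.
sample_means = rng.normal(mu, sigma, size=(n_samples, n)).mean(axis=1)

# Individual x_bar values fall above and below mu, but their average
# is very close to mu, illustrating E(x_bar) = mu.
print(sample_means.min(), sample_means.max())  # scatter around mu
print(sample_means.mean())                     # ~70
```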

An estimator is asymptotically unbiased if, as the sample size goes to infinity, the limit of the estimator's expected value equals the population parameter: $\lim \limits_{n \to \infty} E(\bar x) = \mu$. This means that any bias shrinks away as the sample grows. We can't expect a sample of $n=10$ to give us the exact parameter $\mu$, but as we increase the sample size, the estimator's expected value gets closer and closer to $\mu$; in the limit of an infinitely large sample, $E(\bar x) = \mu$. (The sample mean is in fact unbiased at every $n$, so it is trivially asymptotically unbiased; the interesting cases are estimators that are biased at every finite $n$ but asymptotically unbiased, like the one below.)

Note: This distinction matters because an estimator can be biased in every finite sample yet still be asymptotically unbiased. The variance is an example. Let $\sigma^2$ be the population parameter and consider the estimator $$\hat \sigma^2 = \frac{1}{n}\sum_{i=1}^n(X_i - \bar X)^2,$$ where $\bar X$ is the sample mean. This estimator is biased: $E(\hat \sigma^2) = \frac{n-1}{n}\sigma^2 \ne \sigma^2$ for any finite $n$. But the bias, $-\sigma^2/n$, vanishes as $n \rightarrow \infty$, so $\lim \limits_{n \to \infty} E(\hat \sigma^2) = \sigma^2$.
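
The same kind of simulation illustrates the finite-sample bias and its disappearance. This sketch (again assuming Gaussian data, with a hypothetical $\sigma = 10$, so $\sigma^2 = 100$) computes the $1/n$ estimator for increasing $n$ and compares its average to the theoretical value $\frac{n-1}{n}\sigma^2$:

```python
import numpy as np

# Sketch of asymptotic unbiasedness (assumptions: Gaussian data,
# hypothetical sigma = 10, so sigma^2 = 100).
rng = np.random.default_rng(0)
sigma, n_samples = 10, 50_000

for n in (5, 50, 500):
    samples = rng.normal(0, sigma, size=(n_samples, n))
    # np.var with its default ddof=0 is exactly the 1/n estimator above.
    var_hat = samples.var(axis=1)
    # Theory: E(var_hat) = (n-1)/n * sigma^2, so the bias is -sigma^2/n.
    print(n, round(var_hat.mean(), 2), (n - 1) / n * sigma**2)
```

For $n=5$ the average of $\hat \sigma^2$ sits near $80$, well below $100$; by $n=500$ it is within a fraction of a percent of $\sigma^2$, matching the theoretical bias of $-\sigma^2/n$.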

paqmo
  • Good but you do have to make some assumptions about the population distribution. For instance if you assume a Gaussian distribution the example is perfect. I am thinking that this won't work for distributions with heavy tails for which the variance does not exist. – Michael R. Chernick Dec 06 '16 at 22:56
  • @MichaelChernick Ah true I did very much assume that! Thanks for pointing this out. – paqmo Dec 06 '16 at 23:20