
Suppose one researcher estimates a parameter parametrically, under a correct distributional assumption.

Another researcher estimates the same parameter nonparametrically.

Will there be any difference in accuracy (bias) between the estimates in these two situations?

user81411

1 Answer


Having a correct distributional assumption doesn't automatically determine what estimator you use. Knowing that I have an exponential distribution with rate parameter $\lambda$ (for example) doesn't tell me whether in estimating $\lambda$ I should use maximum likelihood or method of moments or method of quantiles or minimum Kolmogorov-Smirnov distance or something else. Some estimators given a parametric assumption may be biased and some may be unbiased.
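As a small illustration of this point (the distribution, sample size, and estimators here are my own assumed example, not from the question): for an exponential sample, the MLE of the rate, $1/\bar{x}$, is biased upward, while rescaling it by $(n-1)/n$ gives an unbiased estimator.

```python
import random

# Assumed example: X_1, ..., X_n ~ Exponential(rate = lam).
# The MLE of the rate is 1/xbar, which is biased: E[1/xbar] = n*lam/(n-1).
# Rescaling by (n-1)/n removes the bias.
random.seed(1)
lam, n, reps = 2.0, 5, 200_000

mle_sum = 0.0  # running total of the MLE, 1/xbar
adj_sum = 0.0  # running total of the bias-corrected version
for _ in range(reps):
    xbar = sum(random.expovariate(lam) for _ in range(n)) / n
    mle_sum += 1.0 / xbar
    adj_sum += (n - 1) / (n * xbar)

print("MLE average:      ", round(mle_sum / reps, 2))  # near n*lam/(n-1) = 2.5
print("corrected average:", round(adj_sum / reps, 2))  # near lam = 2.0
```

So even with the parametric model exactly right, the bias depends on which estimator of $\lambda$ you pick.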

Some commonly used estimators for parametric situations are biased.

It's even possible to be in a situation where no unbiased estimator exists -- e.g. see here or here.

Even when whatever assumptions the nonparametric method needs in order to yield a suitable estimator of the parameter do hold, that estimator might still be biased, or it may be unbiased.

It's quite possible for a nonparametric estimator to be unbiased while a parametric one is biased. That doesn't mean that the nonparametric one is better, however.

As an example, consider estimating the population mean when sampling from a lognormal distribution. The sample mean is a nonparametric estimator (I don't need to make any specific distributional assumption to arrive at it, since I can simply rely on the weak law of large numbers), and it's unbiased, but the maximum likelihood estimator and the method of moments estimator - each based on the parametric assumption that it's actually lognormal - will both be biased.
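A quick simulation sketch of this lognormal example (the specific values $\mu=0$, $\sigma=1$, $n=10$ are my own illustrative choices, not from the answer):

```python
import math
import random

# Illustrative values (not from the answer): lognormal(mu=0, sigma=1),
# whose population mean is exp(mu + sigma^2/2).
random.seed(2)
mu, sigma, n, reps = 0.0, 1.0, 10, 100_000
true_mean = math.exp(mu + sigma ** 2 / 2)  # about 1.649

sm_sum = 0.0  # sample mean: nonparametric, unbiased
ml_sum = 0.0  # MLE under the lognormal assumption: exp(muhat + s2hat/2)
for _ in range(reps):
    xs = [random.lognormvariate(mu, sigma) for _ in range(n)]
    sm_sum += sum(xs) / n
    logs = [math.log(x) for x in xs]
    muhat = sum(logs) / n
    s2hat = sum((v - muhat) ** 2 for v in logs) / n  # MLE divides by n
    ml_sum += math.exp(muhat + s2hat / 2)

print("true mean:  ", round(true_mean, 3))
print("sample mean:", round(sm_sum / reps, 3))  # close to the true mean
print("MLE:        ", round(ml_sum / reps, 3))  # noticeably above it at n = 10
```

At this small sample size the parametric MLE sits visibly above the true mean, while the nonparametric sample mean does not.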


Accuracy, considered more generally

Accuracy -- how close we are to the thing we're estimating -- is related to both bias and variability. An estimator that is biased may nevertheless be very accurate (in that the estimates tend typically to be very close to the thing being estimated).

It's this consideration that leads many people to look at measures like mean squared error when comparing estimators.
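A standard illustration of this trade-off (my own assumed example, not from the answer): for a normal sample, dividing the sum of squared deviations by $n+1$ gives a biased variance estimator whose mean squared error is nevertheless smaller than that of the unbiased $1/(n-1)$ estimator.

```python
import random

# Assumed example: estimating sigma^2 from a normal sample.
# MSE = variance + bias^2; dividing by n+1 trades a little bias for
# a larger reduction in variance (theory: 2*sigma^4/(n+1) vs 2*sigma^4/(n-1)).
random.seed(3)
sigma2, n, reps = 4.0, 8, 100_000

se_unbiased = 0.0  # accumulated squared error of ss/(n-1)
se_shrunk = 0.0    # accumulated squared error of ss/(n+1)
for _ in range(reps):
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    se_unbiased += (ss / (n - 1) - sigma2) ** 2
    se_shrunk += (ss / (n + 1) - sigma2) ** 2

print("MSE, unbiased 1/(n-1):", round(se_unbiased / reps, 2))
print("MSE, biased 1/(n+1):  ", round(se_shrunk / reps, 2))  # smaller
```

The biased estimator wins on MSE despite losing on bias, which is exactly why bias alone doesn't settle the question of accuracy.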

If the distributional assumption is correct, it adds information (sometimes a great deal, depending on which estimator is then chosen) over the assumptions used by nonparametric methods, which are usually very weak -- such as assuming only continuity or symmetry.

This added information -- if fully utilized or nearly so -- will generally reduce variability. It may not always lead to an estimator with less bias, but it can lead to a quantifiably better estimator (more accurate in a particular sense).

Glen_b
  • I know precision measures the variability of an estimator. – user81411 Apr 01 '17 at 09:19
  • A precise estimator with low bias may handily outperform an imprecise one with no bias. If you want to be *accurate*, you can't ignore that. – Glen_b Apr 01 '17 at 09:21
  • How does one know whether the bias is "low"? – user81411 Apr 01 '17 at 09:22
  • You can compute the bias of the estimator under the distributional assumption. Bias is measured in the same units as the variable, so you can compare the bias of your estimators to the standard errors of the estimators you're comparing (or, as suggested by the way it appears in mean square error, you might compare squared bias with variance). As a more general consideration, you could simply look at the sampling distributions of two estimators (even looking at histograms from simulations, for example) and see that the bias is small relative to the spread. – Glen_b Apr 01 '17 at 09:26
  • In my case, the distribution of the estimator is too complex to derive, so I can hardly check the bias of the estimator under the distributional assumption. Is there any other way to know whether the bias is low? – user81411 Apr 01 '17 at 09:31
  • Variability can be seen from a histogram, but how does one check bias from a histogram? I don't understand how to observe this. – user81411 Apr 01 '17 at 09:36
  • 1. Generate many samples and obtain an estimate for each - enough that the mean of the distribution of the estimator is pretty precisely determined. 2. Draw the histogram of the estimates. 3. Compute and mark on the sample mean of the estimates. 4. Mark on the true (population) parameter. The bias is now visually obvious and can be compared with the typical distance of the estimates from their own mean. – Glen_b Apr 01 '17 at 11:37
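That four-step recipe can be sketched directly in code (the estimator and parameter values here are an assumed example, estimating a lognormal population mean with exp(muhat + s2hat/2); a text histogram stands in for a plotted one):

```python
import math
import random

# Assumed example: lognormal(mu=0, sigma=1); true population mean exp(1/2).
random.seed(4)
mu, sigma, n, reps = 0.0, 1.0, 10, 20_000
true_param = math.exp(mu + sigma ** 2 / 2)

# Step 1: generate many samples and compute the estimate for each.
ests = []
for _ in range(reps):
    logs = [random.gauss(mu, sigma) for _ in range(n)]  # logs of lognormal draws
    muhat = sum(logs) / n
    s2hat = sum((v - muhat) ** 2 for v in logs) / n
    ests.append(math.exp(muhat + s2hat / 2))

# Step 2: a crude text histogram of the estimates.
lo, hi, bins = 0.0, 4.0, 16
counts = [0] * bins
for e in ests:
    if lo <= e < hi:
        counts[int((e - lo) / (hi - lo) * bins)] += 1
for i, c in enumerate(counts):
    print(f"{lo + i * (hi - lo) / bins:4.2f} {'#' * (c // 200)}")

# Steps 3 and 4: mark the mean of the estimates and the true parameter,
# and compare the bias with the spread of the estimates.
est_mean = sum(ests) / len(ests)
spread = (sum((e - est_mean) ** 2 for e in ests) / len(ests)) ** 0.5
print("mean of estimates:", round(est_mean, 3))
print("true parameter:   ", round(true_param, 3))
print("bias vs spread:   ", round(est_mean - true_param, 3), "vs", round(spread, 3))
```

Here the gap between the mean of the estimates and the true parameter (the bias) is small relative to the spread of the histogram, which is the visual comparison the steps describe.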