
Let $Y_{ij} \stackrel{d}{=} N(\mu_i, \sigma^2)$ for $i\in \{1,\ldots,n\}$, $j\in\{1,2\}$, and assume $Y_{i1}$ is independent of $Y_{i2}$. The parameter of interest is $\sigma^2$.

Setting up the log-likelihood and deriving the MLEs, one finds: $$\hat \mu_i =\dfrac{Y_{i1}+Y_{i2}}{2} \qquad \hat{\sigma^2} = \dfrac{1}{4n}\sum_{i}(Y_{i1}-Y_{i2})^2$$
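For completeness, a sketch of the derivation (the log-likelihood follows directly from the model above): $$\ell(\mu_1,\ldots,\mu_n,\sigma^2) = -n\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\sum_{j=1}^{2}(Y_{ij}-\mu_i)^2.$$ Plugging in $\hat\mu_i$ gives $Y_{ij}-\hat\mu_i = \pm(Y_{i1}-Y_{i2})/2$, so maximizing over $\sigma^2$ (with $2n$ observations in total) yields $$\hat{\sigma^2} = \frac{1}{2n}\sum_{i,j}(Y_{ij}-\hat\mu_i)^2 = \frac{1}{4n}\sum_i (Y_{i1}-Y_{i2})^2.$$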

One can easily verify that this estimator is biased: $$E[\hat{\sigma^2}] = \dfrac{1}{4n}\sum_i E[(Y_{i1}-Y_{i2})^2] = \dfrac{\sigma^2}{2},$$ since $Y_{i1}-Y_{i2}\stackrel{d}{=}N(0,2\sigma^2)$ and hence $E[(Y_{i1}-Y_{i2})^2] = 2\sigma^2$.

I have done all the derivations, but I don't really see what the problem is. This bias is easily resolved by considering a new estimator $\tilde{\sigma^2} = 2\hat{\sigma^2}$.

In all the literature I've consulted, the bias above motivates a transformed variable $V_i = \frac{Y_{i1}-Y_{i2}}{\sqrt{2}}$, which is distributed as $N(0,\sigma^2)$ and removes the nuisance parameters $\mu_i$. This results in an unbiased estimator for $\sigma^2$, namely $\frac{\sum_i V_i^2}{n}$, at the cost of greater variance (a loss of information).
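A quick way to sanity-check the unbiasedness claim is by simulation. Below is a minimal sketch (assuming numpy; the distribution of the nuisance means, the sample size, and the replication count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 50, 1.0, 10_000  # illustrative choices

estimates = np.empty(reps)
for r in range(reps):
    mu = rng.normal(0.0, np.sqrt(2.0), size=n)                  # nuisance means mu_i (arbitrary)
    y = rng.normal(mu[:, None], np.sqrt(sigma2), size=(n, 2))   # Y_i1, Y_i2
    v = (y[:, 0] - y[:, 1]) / np.sqrt(2.0)                      # V_i ~ N(0, sigma^2)
    estimates[r] = np.mean(v ** 2)                              # sum_i V_i^2 / n

print(estimates.mean())  # close to sigma2 = 1.0, i.e. unbiased
```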

The figure below shows a simulation study using $\mu_i \stackrel{d}{=} N(0,2)$ and $\sigma^2 = 1$.

[Figure: "Neyman Scott problem" simulation study]
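That study can be reproduced with a short script (again a sketch assuming numpy; the seed and the grid of sample sizes are arbitrary). Note that the MLE $\hat{\sigma^2}$ settles near $\sigma^2/2 = 0.5$ rather than near $\sigma^2 = 1$ as $n$ grows, while the corrected estimator $\tilde{\sigma^2} = 2\hat{\sigma^2}$ settles near $1$:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 1.0

# One draw per sample size; mu_i ~ N(0, 2) as in the study above.
for n in (10, 100, 1_000, 10_000, 100_000):
    mu = rng.normal(0.0, np.sqrt(2.0), size=n)
    y = rng.normal(mu[:, None], np.sqrt(sigma2), size=(n, 2))
    sigma2_hat = np.sum((y[:, 0] - y[:, 1]) ** 2) / (4 * n)
    print(f"n = {n:6d}  sigma2_hat = {sigma2_hat:.4f}  2*sigma2_hat = {2 * sigma2_hat:.4f}")
```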

Question

Could someone explain the 'problem' in the Neyman-Scott problem? The bias doesn't seem like a real problem, since $2\hat{\sigma^2}$ resolves the issue without having to consider the proposed transformation. This unbiased estimator is exactly the same as the marginal MLE.


asked by dietervdf (edited by kjetil b halvorsen)

Comments:
• The problem has nothing to do with the fact that the bias can be corrected. It's that a particular *procedure*--namely, the Maximum Likelihood estimator--does not enjoy a limiting property usually ascribed to it for such problems; namely, *consistency*. The interest lies in resolving this apparent paradox rather than in developing an improved estimator. – whuber May 26 '17 at 21:36
• Well, I don't really see the paradox. It seems logical to me that MLEs aren't always unbiased (or have a bias which tends to 0 when $n$ is large enough). Anyhow, I would gladly accept your comment as an answer. – dietervdf May 26 '17 at 22:53
• Nobody claims MLEs are unbiased--they almost always *are* biased. The issue concerns their *consistency*, which is one of the fundamental properties frequently claimed of MLE. It sounds like a little reading into the fundamentals of MLE would resolve your questions about the Neyman-Scott paradox. – whuber May 27 '17 at 12:35

1 Answer


Partially answered in comments: The problem has nothing to do with the fact that the bias can be corrected. It's that a particular procedure--namely, the Maximum Likelihood estimator--does not enjoy a limiting property usually ascribed to it for such problems; namely, consistency. The interest lies in resolving this apparent paradox rather than in developing an improved estimator. – whuber

(Well, I don't really see the paradox. It seems logical to me that MLEs aren't always unbiased (or have a bias which tends to 0 when $n$ is large enough). Anyhow, I would gladly accept your comment as an answer. – dietervdf)

Nobody claims MLEs are unbiased--they almost always are biased. The issue concerns their consistency, which is one of the fundamental properties frequently claimed of MLE. It sounds like a little reading into the fundamentals of MLE would resolve your questions about the Neyman-Scott paradox. – whuber
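To spell out the inconsistency (a standard law-of-large-numbers argument, sketched here): $$\hat{\sigma^2} = \frac{1}{4n}\sum_{i=1}^n (Y_{i1}-Y_{i2})^2 \;\xrightarrow{p}\; \frac{1}{4}\,E\big[(Y_{11}-Y_{12})^2\big] = \frac{2\sigma^2}{4} = \frac{\sigma^2}{2} \neq \sigma^2,$$ so the MLE converges to the wrong value no matter how large $n$ is; the bias never washes out. The usual consistency theory fails because the number of nuisance parameters $\mu_i$ grows with $n$ (the incidental parameter problem): each $\mu_i$ is estimated from only two observations, so the errors in the $\hat\mu_i$ never average away.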

For another example, see Difference of 'dynamic panel (Nickell) bias' and the 'incidental parameter problem' in panel data?

answered by kjetil b halvorsen