
According to the Lehmann–Scheffé theorem, if a statistic $T$ is sufficient and complete for $\theta$, and $E(T)=\theta$, then $T$ is a uniformly minimum-variance unbiased estimator (UMVUE).

I am wondering how to determine whether an unbiased estimator is a UMVUE:

  1. if $T$ is not sufficient, can it be a UMVUE?
  2. if $T$ is not complete, can it be a UMVUE?
  3. if $T$ is not sufficient or complete, can it be a UMVUE?
Alex Brown
  • Should the last one "if $T$ is **not** sufficient **or** complete" perhaps be "if $T$ is **neither** sufficient **nor** complete" (if you mean both conditions hold simultaneously)? – Richard Hardy Aug 16 '15 at 19:43
  • In 2., if $T$ is not complete, then it can still be *an* MVUE, but you do need completeness if you are to attach the letter U to it :) – JohnK Aug 16 '15 at 23:41
  • A [necessary and sufficient condition](http://pages.stat.wisc.edu/~doksum/STAT709/n709-31.pdf) for an unbiased estimator (with finite second moment) to be the UMVUE is that it be uncorrelated with every unbiased estimator of zero. – StubbornAtom Jan 30 '20 at 21:21

2 Answers


*On Uniformly Minimum Variance Unbiased Estimation when no Complete Sufficient Statistics Exist* by L. Bondesson gives some examples of UMVUEs in models that admit no complete sufficient statistic, including the following one:

Let $X_1, \ldots, X_n$ be independent observations of a random variable $X = \mu + \sigma Y$, where $\mu$ and $\sigma$ are unknown and $Y$ is gamma-distributed with known shape parameter $k$ and known scale parameter $\theta$. Then $\bar{X}$ is the UMVUE of $E(X) = \mu + k\theta\sigma$. However, when $k \neq 1$, there is no complete sufficient statistic for $(\mu, \sigma)$.
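
As a quick numerical sanity check (a sketch added here, not from Bondesson's paper; it assumes NumPy, and the parameter values are arbitrary), the average of many sample means does land on $\mu + k\theta\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 2.0, 1.5   # "unknown" location and scale, fixed here for the check
k, theta = 3.0, 0.7    # known gamma shape and scale, with k != 1
n, reps = 50, 20_000   # sample size and number of Monte Carlo replications

# X = mu + sigma * Y with Y ~ Gamma(k, theta), so E(X) = mu + k * theta * sigma
x = mu + sigma * rng.gamma(k, theta, size=(reps, n))
xbar = x.mean(axis=1)  # one sample mean per replication

print("average of the sample means:", xbar.mean())
print("mu + k*theta*sigma:         ", mu + k * theta * sigma)
```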

David R

Let us show that there can be a UMVUE which is not a sufficient statistic.

First of all, if the estimator $T$ takes (say) the value $0$ on all samples, then clearly $T$ is a UMVUE of $0$, which can be considered a (constant) function of $\theta$. On the other hand, this estimator $T$ is clearly not sufficient in general.

It is a bit harder to find a UMVUE $Y$ of the "entire" unknown parameter $\theta$ (rather than a UMVUE of a function of it) such that $Y$ is not sufficient for $\theta$. E.g., suppose the "data" are given just by one normal r.v. $X\sim N(\tau,1)$, where $\tau\in\mathbb{R}$ is unknown. Clearly, $X$ is sufficient and complete for $\tau$. Let $Y=1$ if $X\ge0$ and $Y=0$ if $X<0$, and let
$\theta:=\mathsf{E}_\tau Y=\mathsf{P}_\tau(X\ge0)=\Phi(\tau)$; as usual, we denote by $\Phi$ and $\varphi$, respectively, the cdf and pdf of $N(0,1)$.
So, the estimator $Y$ is unbiased for $\theta=\Phi(\tau)$ and is a function of the complete sufficient statistic $X$. Hence, by the Lehmann–Scheffé theorem, $Y$ is a UMVUE of $\theta=\Phi(\tau)$.
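
A small simulation (a sketch, assuming NumPy and SciPy; the value of $\tau$ is an arbitrary choice) illustrates the unbiasedness of $Y$ for $\Phi(\tau)$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
tau = 0.8                           # arbitrary fixed value of the parameter

x = rng.normal(loc=tau, scale=1.0, size=500_000)
y = (x >= 0).astype(float)          # the estimator Y = 1{X >= 0}

print("mean of Y:", y.mean())       # Monte Carlo estimate of E_tau(Y)
print("Phi(tau): ", norm.cdf(tau))  # the target theta = Phi(tau)
```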

On the other hand, the function $\Phi$ is continuous and strictly increasing on $\mathbb{R}$, from $0$ to $1$. So, the correspondence $\mathbb{R}\ni\tau=\Phi^{-1}(\theta)\leftrightarrow\theta=\Phi(\tau)\in(0,1)$ is a bijection. That is, we can reparametrize the problem, from $\tau$ to $\theta$, in a one-to-one manner. Thus, $Y$ is a UMVUE of $\theta$, not just for the "old" parameter $\tau$, but for the "new" parameter $\theta\in(0,1)$ as well. However, $Y$ is not sufficient for $\tau$ and therefore not sufficient for $\theta$. Indeed, \begin{multline*} \mathsf{P}_\tau(X<-1|Y=0)=\mathsf{P}_\tau(X<-1|X<0)=\frac{\mathsf{P}_\tau(X<-1)}{\mathsf{P}_\tau(X<0)} \\ =\frac{\Phi(-\tau-1)}{\Phi(-\tau)} \sim\frac{\varphi(-\tau-1)/(\tau+1)}{\varphi(-\tau)/\tau}\sim\frac{\varphi(-\tau-1)}{\varphi(-\tau)}=e^{-\tau-1/2} \end{multline*} as $\tau\to\infty$; here we used the known asymptotic equivalence $\Phi(-\tau)\sim\varphi(-\tau)/\tau$ as $\tau\to\infty$, which follows by l'Hôpital's rule. So, $\mathsf{P}_\tau(X<-1|Y=0)$ depends on $\tau$ and hence on $\theta$, which shows that $Y$ is not sufficient for $\theta$ (whereas $Y$ is a UMVUE of $\theta$).
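
The dependence on $\tau$ can also be seen numerically (a sketch assuming SciPy for $\Phi$): the exact conditional probability $\Phi(-\tau-1)/\Phi(-\tau)$ clearly changes with $\tau$ and approaches $e^{-\tau-1/2}$ for large $\tau$:

```python
import numpy as np
from scipy.stats import norm

# P_tau(X < -1 | Y = 0) = Phi(-tau - 1) / Phi(-tau); it varies with tau
# and is asymptotically equivalent to exp(-tau - 1/2) as tau -> infinity.
for tau in [0.0, 1.0, 2.0, 5.0, 10.0]:
    ratio = norm.cdf(-tau - 1.0) / norm.cdf(-tau)
    print(f"tau={tau:5.1f}  P(X<-1|Y=0)={ratio:.3e}  exp(-tau-1/2)={np.exp(-tau - 0.5):.3e}")
```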

  • If the estimator $T$ always takes the value $0$, how can it be unbiased? – Xi'an Jan 15 '18 at 11:24
  • By definition, $T$ is an unbiased estimator of a function $q(\theta)$ of the parameter $\theta$ if $E_\theta T=q(\theta)$ for all values of $\theta$. So, if $q(\theta)=0$ for all $\theta$, then of course $T=0$ will be an unbiased estimator of this $q(\theta)$. And this is what I said: that $T=0$ is clearly an unbiased estimator of the constant zero function of the parameter. – Iosif Pinelis Jan 15 '18 at 15:23
  • OK, thanks, I had missed the fact that you were "estimating" a constant function! – Xi'an Jan 15 '18 at 15:26