
Suppose it is known that the mean of the random variable $X_i$ is $\mu_i\theta$ $(i = 1, 2, \ldots, n)$, where the $\mu_i$ are known constants and $\theta$ is unknown. Let $\Sigma$ be the variance matrix of the random vector $X = \left[X_1, X_2, \ldots, X_n\right]^T$.

Show that the minimum-variance unbiased linear estimator of $\theta$ is given by $\hat \theta\left(x\right) = \frac{\mu^T\Sigma^{-1}x}{\mu^T\Sigma^{-1}\mu}$, where $\mu = \left[\mu_1, \mu_2, \ldots, \mu_n \right]^T$.

The unbiasedness part is easy to prove (the one-line computation is included below for reference). But for the minimum-variance part, I don't have the PDF of $X_i$, so it cannot be proved in the traditional way. Does anyone have suggestions on this problem?
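For reference, the unbiasedness follows directly from linearity of expectation: since $\mathbb{E}(X) = \mu\theta$,
$$\mathbb{E}\big[\hat\theta(X)\big] = \frac{\mu^T\Sigma^{-1}\,\mathbb{E}(X)}{\mu^T\Sigma^{-1}\mu} = \frac{\left(\mu^T\Sigma^{-1}\mu\right)\theta}{\mu^T\Sigma^{-1}\mu} = \theta.$$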

Dony
  • Please add the `[self-study]` tag & read its [wiki](http://stats.stackexchange.com/tags/self-study/info). Then tell us what you understand thus far, what you've tried & where you're stuck. We'll provide hints to help you get unstuck. – gung - Reinstate Monica Jun 28 '15 at 22:27

1 Answer


Because the estimator is restricted to be linear, it can be written as $\tilde{\theta} = a^T x$ for some $a \in \mathbb{R}^n$. Since it is unbiased, it must satisfy $\mathbb{E}(\tilde{\theta}) = a^T\mathbb{E}(X) = a^T\mu\,\theta = \theta$ for all $\theta$, i.e., $a^T \mu = 1$. Therefore, finding the minimum-variance unbiased linear estimator is equivalent to solving \begin{align} & \min_{a \in \mathbb{R}^n} \text{Var}(a^TX) = a^T\Sigma a \\ & \text{subject to }\; a^T\mu = 1. \tag{1} \end{align} This optimization problem is easily solved with a Lagrange multiplier, and the result is $\hat{\theta}(x) = \frac{\mu^T\Sigma^{-1}x}{\mu^T\Sigma^{-1}\mu}$.
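Before the details, here is a quick numerical illustration of $(1)$. It is a minimal sketch assuming arbitrary illustrative values of $n$, $\mu$, and $\Sigma$ (none taken from the question); it checks that the closed-form weights attain a variance no larger than that of randomly drawn feasible weight vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
mu = rng.normal(size=n)               # known constants mu_i (illustrative)
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)       # an arbitrary positive-definite variance matrix

w = np.linalg.solve(Sigma, mu)        # Sigma^{-1} mu
a_star = w / (mu @ w)                 # closed-form minimizer of (1)
best = a_star @ Sigma @ a_star        # = 1 / (mu^T Sigma^{-1} mu)

# Every other feasible a (rescaled so that a^T mu = 1) must do no better.
for _ in range(10_000):
    a = rng.normal(size=n)
    a /= a @ mu                       # enforce the unbiasedness constraint
    assert a @ Sigma @ a >= best - 1e-12

print("minimum variance:       ", best)
print("1 / mu^T Sigma^{-1} mu: ", 1.0 / (mu @ w))
```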

Details:

To solve $(1)$, first construct the Lagrangian $$f(a) = a^T\Sigma a - \lambda (a^T\mu - 1). \tag{2}$$

Differentiating $(2)$ with respect to $a$ and setting the gradient to zero, we have $$2\Sigma a - \lambda \mu = 0.$$ Solving for $a$, we have $$a = \frac{1}{2}\lambda\Sigma^{-1}\mu.\tag{3}$$ Substituting this back into the constraint $\mu^T a = 1$ gives $\frac{1}{2}\lambda\,\mu^T\Sigma^{-1}\mu = 1$, so $$\lambda = \frac{2}{\mu^T\Sigma^{-1}\mu}.\tag{4}$$ Plugging $(4)$ back into $(3)$, we obtain $$a = \frac{\Sigma^{-1}\mu}{\mu^T\Sigma^{-1}\mu}.$$ Because $\Sigma$ is positive definite, the objective $a^T\Sigma a$ is strictly convex, so this stationary point is indeed the global minimizer. Therefore $\hat{\theta}(x) = a^Tx = \frac{\mu^T\Sigma^{-1} x}{\mu^T\Sigma^{-1}\mu}$.
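As a sanity check on the result (not a proof), here is a minimal Monte Carlo sketch. The Gaussian sampling distribution and the particular values of $\theta$, $\mu$, and $\Sigma$ are assumptions for illustration only, since the derivation above is distribution-free; the check verifies that $\hat\theta$ is unbiased with variance $a^T\Sigma a = 1/(\mu^T\Sigma^{-1}\mu)$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.5                                   # "true" parameter (assumed)
mu = np.array([1.0, -0.5, 2.0])               # known constants (assumed)
L = np.array([[ 1.0, 0.0, 0.0],
              [ 0.3, 1.0, 0.0],
              [-0.2, 0.4, 1.0]])
Sigma = L @ L.T                               # variance matrix with Cholesky factor L

w = np.linalg.solve(Sigma, mu)                # Sigma^{-1} mu
a = w / (mu @ w)                              # optimal weights from the derivation

# Draw many realizations of X with mean mu * theta and variance Sigma.
X = theta * mu + rng.normal(size=(200_000, 3)) @ L.T
theta_hat = X @ a                             # theta-hat(x) = a^T x for each draw

print("mean of theta-hat:", theta_hat.mean(), " (target:", theta, ")")
print("var  of theta-hat:", theta_hat.var(), " (target:", 1.0 / (mu @ w), ")")
```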

Zhanxiong