
Suppose that $X_1,...,X_n$ is an iid random sample from a Poisson distribution with mean $\theta$. I would like to prove that there exists no unbiased estimator of $\frac{1}{\theta}$.

To do so, I will let $\delta(X)$ be an estimator of $\frac{1}{\theta}$.

Then, I'd like to equate the expectation of $\delta(X)$ with $\frac{1}{\theta}$:

$$E[\delta(X)] = \sum_{x=0}^{\infty} \delta(x)\,P(X=x).$$
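To be explicit, since $X$ here stands for the whole sample $(X_1,\dots,X_n)$, I take this to mean the multiple sum over the joint pmf:

$$E[\delta(X_1,\dots,X_n)] = \sum_{x_1=0}^{\infty}\cdots\sum_{x_n=0}^{\infty}\delta(x_1,\dots,x_n)\prod_{i=1}^{n}\frac{e^{-\theta}\theta^{x_i}}{x_i!}.$$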

Now, my problem is that some books let $Y = \sum_{i=1}^{n} X_i$ and then have:

$$E[\delta(Y)] = \sum_{y=0}^{\infty} \delta(y)\,P(Y=y).$$
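If I understand correctly, the books are using $Y \sim \text{Poisson}(n\theta)$ here, so this would read

$$E[\delta(Y)] = \sum_{y=0}^{\infty}\delta(y)\,\frac{e^{-n\theta}(n\theta)^{y}}{y!}.$$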

How are these two methods equivalent? How is it that working with the sum of the $X_i$'s is somehow equivalent to working with $X_1,\dots,X_n$?

In other words, why is $\delta(X_1,\dots,X_n) = \delta(Y) = \delta\!\left(\sum_{i=1}^{n} X_i\right)$?

Thanks!!!

user123276
  • See the comments at [this post](http://stats.stackexchange.com/questions/79778/how-does-one-show-that-there-is-no-unbiased-estimator-of-lambda-1-for-a-po), which might be of some help – Glen_b Feb 26 '14 at 13:28
  • You might like to double check your title. Is there a symbol missing or something? – Glen_b Feb 26 '14 at 23:15

1 Answer


Maybe these books use the fact that if the $X_i$ are an iid sample from a Poisson distribution with mean $\theta$, then $Y = \sum_{i=1}^{n} X_i$ follows a Poisson distribution with mean $n\theta$? Thus estimating $n\theta$ gives an estimate of $\theta$.
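For example, here is a quick sketch of that fact using moment generating functions (relying on the independence assumed in the question):

$$M_Y(t) = \prod_{i=1}^{n} M_{X_i}(t) = \left(e^{\theta(e^{t}-1)}\right)^{n} = e^{n\theta(e^{t}-1)},$$

which is the mgf of a Poisson distribution with mean $n\theta$, so $Y = \sum_{i=1}^{n} X_i \sim \text{Poisson}(n\theta)$.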

Scratch