
I am slightly confused by the last step of the Cramér-Rao lower bound proof in John Rice's _Mathematical Statistics and Data Analysis_.

In the last step, it is used that $E[T] = \theta$. I was under the impression that the expectation of an unbiased estimator $T$ is the true parameter of the distribution, $\theta_0$, and therefore $\frac{\partial}{\partial \theta} E[T] = \frac{\partial}{\partial \theta} \theta_0 = 0$. In so many of the proofs in this chapter, care is taken to distinguish $\theta_0$ from $\theta$.

Could someone explain why here $E[T] = \theta$?

[Image: Cramer Rao Inequality (theorem statement)]

[Image: Cramer Rao Inequality Proof]
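
In case the images above do not render, here is the final step as I understand it. This is my own reconstruction of the standard covariance argument (assuming the usual regularity conditions that allow differentiating under the integral sign), so the notation may not match Rice's exactly; $Z$ denotes the score and $I(\theta)$ the Fisher information for a single observation.

$$\text{Cov}(T, Z) = E[TZ] - E[T]E[Z] = E[TZ] = \int \cdots \int t(x_1, \dots, x_n) \Big( \sum_{i=1}^n \frac{\partial}{\partial \theta} \log f(x_i|\theta) \Big) \prod_{i=1}^n f(x_i|\theta) \, dx_1 \cdots dx_n$$

$$= \frac{\partial}{\partial \theta} \int \cdots \int t(x_1, \dots, x_n) \prod_{i=1}^n f(x_i|\theta) \, dx_1 \cdots dx_n = \frac{\partial}{\partial \theta} E[T] = \frac{\partial}{\partial \theta} \theta = 1,$$

and then by the Cauchy-Schwarz inequality,

$$1 = \text{Cov}(T, Z)^2 \le \text{var}(T) \, \text{var}(Z) = \text{var}(T) \, n I(\theta) \quad \Rightarrow \quad \text{var}(T) \ge \frac{1}{n I(\theta)}.$$

The step I am asking about is $\frac{\partial}{\partial \theta} E[T] = \frac{\partial}{\partial \theta} \theta = 1$ on the second line.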

asked by Snowball
  • If $T = t(x_1, \dots, x_n)$ is an unbiased estimator for $\theta,$ then by definition $E(T) = \theta.$ Sometimes in the null and alternative hypotheses of a **test** one writes $H_0: \theta = \theta_0$ vs. $H_a: \theta \ne \theta_0.$ In that context, $\theta$ is an unknown parameter and $\theta_0$ is its hypothetical value. In the C-R Inequality we seek to **estimate** $\theta,$ which has an unknown _constant_ value, and I see no need to refer to a value $\theta_0.$ – BruceET Feb 21 '21 at 06:33
  • Everything in the proof as provided is written in terms of $\theta$ being the ("true") parameter of the distribution, i.e. $E[\cdot]=E_\theta[\cdot]$ and $\text{var}(\cdot)=\text{var}_\theta(\cdot)$. There is no need to introduce $\theta_0$ in this case. – Xi'an Feb 21 '21 at 11:34
  • The $\frac{\partial}{\partial \theta}$ comes from the definition of $Z$. It is defined as $Z = \sum_{i=1}^n \frac{\partial}{\partial \theta} \log f(X_i|\theta)\big|_{\theta = \theta_0}$. It is looking at the slope of the log likelihood function at the parameter $\theta$. In the definition of $Z$, we shouldn't be taking the derivative with respect to the true parameter $\theta_0$; rather, we should be taking the derivative with respect to $\theta$. – Snowball Mar 02 '21 at 07:24
  • The way it's written above, the definition of score is changed to taking the derivative with respect to the true parameter. This is where I'm confused. – Snowball Mar 02 '21 at 07:26
  • To clarify further, in the proof above we rely on the fact that the expected value of the score function at the true parameter is zero (written as $E[Z] = 0$ in the textbook). But the score is defined as the derivative of the log likelihood function with respect to its parameter $\theta$ (not necessarily the true parameter). How does the proof above make sense if we don't distinguish these two? (See the numerical sketch after these comments.) – Snowball Mar 02 '21 at 08:09
  • Please add the [tag:self-study] tag & read its [wiki](https://stats.stackexchange.com/tags/self-study/info). Then tell us what you understand thus far, what you've tried & where you're stuck. We'll provide hints to help you get unstuck. Please make these changes as just posting your homework & hoping someone will do it for you is grounds for closing. – kjetil b halvorsen Apr 07 '21 at 23:45
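
Below is a small numerical sketch of the two facts the comments lean on: $E[Z] = 0$ when the score is evaluated at the true parameter, and $E[T] = \theta$ for an unbiased $T$. Everything in it (the Normal$(\theta_0, 1)$ model, the sample size, the variable names) is an illustrative assumption on my part, not something taken from Rice.

```python
import numpy as np

# Illustrative model (my assumption, not from Rice): X_1, ..., X_n ~ Normal(theta0, 1).
# Then T = sample mean is unbiased for theta, and the score is
# Z = sum_i d/dtheta log f(X_i | theta) = sum_i (X_i - theta).
rng = np.random.default_rng(0)
theta0 = 2.0       # true parameter value
n = 10             # sample size
reps = 200_000     # Monte Carlo replications

x = rng.normal(loc=theta0, scale=1.0, size=(reps, n))

T = x.mean(axis=1)            # unbiased estimator of theta
Z = (x - theta0).sum(axis=1)  # score evaluated at the true parameter

print("E[T]   ~", T.mean(), "  (should be close to theta0 =", theta0, ")")
print("E[Z]   ~", Z.mean(), "  (should be close to 0)")
print("var(T) ~", T.var(), "  vs the C-R bound 1/(n*I(theta)) =", 1.0 / n)
```

For this model the Fisher information per observation is $I(\theta) = 1$, so the bound is $1/n$ and the sample mean attains it.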

0 Answers