
I am currently studying the Cramér-Rao lower bound. My notes say the following:

Theorem: Cramér-Rao lower bound
Let $Y_1, \dots, Y_n$ have a joint distribution $f_\theta (\mathbf{y})$, where $f_\theta (\mathbf{y})$ satisfies the following two regularity conditions:

  1. The support supp$f_\theta (\mathbf{y})$ does not depend on $\theta$;
  2. for any statistic $T = T(Y_1, \dots, Y_n)$ with a finite $E_\theta \lvert T \rvert$, integration w.r.t. $\mathbf{y}$ and differentiation w.r.t. $\theta$ in calculating $(E[T])_\theta^\prime$ can be interchanged, that is, $$(E[T])_\theta^\prime = \dfrac{d}{d\theta} \int T(\mathbf{y})f_\theta (\mathbf{y}) \ d\mathbf{y} = \int T(\mathbf{y})f_\theta^\prime (\mathbf{y}) \ d\mathbf{y}$$

    Let $T$ be an unbiased estimator for $\theta$, with a finite variance. Then $$\text{Var}(T) \ge \dfrac{1}{I(\theta)},$$ where $I(\theta) = E\left[ \left( (\ln [f_\theta(\mathbf{Y})])_\theta^\prime \right)^2 \right]$ is called the Fisher information number.
    More generally, if $T$ is an unbiased estimator for $g(\theta)$, where $g(\cdot)$ is differentiable, then, $$\text{Var}(T) \ge \dfrac{(g^\prime(\theta))^2}{I(\theta)}$$
    Proof: Cramér-Rao lower bound
    Since $T$ is an unbiased estimator for $g(\theta)$, we have $$g(\theta) = ET = \int T(\mathbf{y}) f_\theta(\mathbf{y}) \ d\mathbf{y},$$ and, under the regularity conditions (using $f_\theta^\prime(\mathbf{y}) = (\ln [f_\theta(\mathbf{y})])_\theta^\prime \, f_\theta(\mathbf{y})$), $$g^\prime(\theta) = (E[T])_\theta^\prime = \dfrac{d}{d \theta} \int T(\mathbf{y}) f_\theta(\mathbf{y}) \ d\mathbf{y} = \int T(\mathbf{y}) f_\theta^\prime (\mathbf{y}) \ d\mathbf{y} = \int T(\mathbf{y}) (\ln \left[ f_\theta (\mathbf{y})\right])_\theta^\prime \, f_\theta (\mathbf{y}) \ d\mathbf{y} = E[T(\mathbf{Y})(\ln [f_\theta(\mathbf{Y})])_\theta^\prime ]$$
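
To make the bound concrete for myself (this example is my own, not from the notes): if $Y_1, \dots, Y_n$ are i.i.d. $N(\theta, \sigma^2)$ with $\sigma^2$ known, then $\ln f_\theta(\mathbf{y}) = -\tfrac{n}{2}\ln(2\pi\sigma^2) - \tfrac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \theta)^2$, so $(\ln [f_\theta(\mathbf{Y})])_\theta^\prime = \tfrac{1}{\sigma^2}\sum_{i=1}^n (Y_i - \theta)$ and $$I(\theta) = E\left[\left(\dfrac{1}{\sigma^2}\sum_{i=1}^n (Y_i - \theta)\right)^2\right] = \dfrac{n}{\sigma^2}.$$ The sample mean $\bar{Y}$ is unbiased for $\theta$ with $\text{Var}(\bar{Y}) = \sigma^2/n = 1/I(\theta)$, so it attains the bound in this case.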

I am confused by this part:

Since $T$ is an unbiased estimator for $g(\theta)$, we have $$g(\theta) = ET = \int T(\mathbf{y}) f_\theta(\mathbf{y}) \ d\mathbf{y}$$

Why does $T$ being an unbiased estimator for $g(\theta)$ imply that $g(\theta) = ET = \int T(\mathbf{y}) f_\theta(\mathbf{y}) \ d\mathbf{y}$?


EDIT

From https://en.wikipedia.org/wiki/Bias_of_an_estimator#Definition :

"... or equivalently, if the expected value of the estimator matches that of the parameter." In this case, the expected value of the estimator does not match the parameter, but rather a function of the parameter. So there seems to be some disconnect here between what's in these notes and the Wikipedia definition of an unbiased estimator.

The Pointer
  • What definition(s) and formula(s) do you know for expectation? – whuber Jul 19 '21 at 13:38
  • Re the edit: you misinterpret Wikipedia. What is being estimated here is $g(\theta).$ *A fortiori,* the estimator is unbiased if its expectation equals $g(\theta).$ Indeed, when $g$ is a one-to-one function, $g(\theta)$ *is* a parameter. – whuber Jul 19 '21 at 14:52
  • @whuber Ok, that makes sense. I think I was confused because $\theta$ itself is usually the parameter being estimated, but, in this case, the "parameter" is $g(\theta)$. – The Pointer Jul 19 '21 at 14:57

1 Answer


This is the definition of unbiasedness.
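
In general, $$E_\theta[T] = \int T(\mathbf{y}) f_\theta(\mathbf{y}) \ d\mathbf{y}$$ is simply how the expectation of the statistic $T$ under the density $f_\theta$ is computed, and saying that $T$ is unbiased for $g(\theta)$ means $E_\theta[T] = g(\theta)$ for every $\theta$; chaining the two gives the displayed identity. As a small illustration (an example added here, not taken from the question's notes): if $Y_1, \dots, Y_n$ are i.i.d. $\text{Exponential}(\theta)$ with density $f_\theta(y) = \theta e^{-\theta y}$, then $E_\theta[\bar{Y}] = 1/\theta$, so $\bar{Y}$ is an unbiased estimator of $g(\theta) = 1/\theta$ even though it is not unbiased for $\theta$ itself.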

Shang Zhang
  • Care to elaborate? https://en.wikipedia.org/wiki/Bias_of_an_estimator#Definition – The Pointer Jul 19 '21 at 13:22
  • The Wiki already mentions that being unbiased means the expected value of the estimator equals the true (unknown) parameter value. Maybe you read the text wrong? It is $T$ (a statistic), not $Y$ (the raw data), that is the unbiased estimator. – Shang Zhang Jul 19 '21 at 13:34
  • See my edit. "... or equivalently, if the expected value of the estimator matches that of the parameter." In this case, the expected value of the estimator does not match the parameter, but rather a function of the parameter. So there seems to be some disconnect here between what's in these notes and the Wikipedia definition of an unbiased estimator. – The Pointer Jul 19 '21 at 14:27