
The Fisher information for a sample $x$ in the experiment $(\Omega, \mathcal{F}, P_\theta)$ is defined as $$\operatorname{Var}\left[\nabla_{\theta}\ell(\theta, x) \right] = \mathbb{E}\left[[\nabla_{\theta} \ell(\theta, x)] [\nabla_{\theta}\ell(\theta, x)]^T\right],$$ where $\ell(\theta, x) = \log f(x|\theta)$.

I do not understand how this definition applies to a very basic and well-known example: let $x \sim U(0,\theta)$. In this case the probability density of $x$ is $$ f(x|\theta) = \begin{cases} \frac{1}{\theta} & x\in [0, \theta]\\ 0 & \text{otherwise}. \end{cases} $$

It looks to me that the density is not differentiable with respect to $\theta$ at $\theta = x$; consequently, $\nabla_{\theta}\ell(\theta, x)$ is not defined at $\theta = x$.
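
To make the issue concrete: for a fixed $x > 0$, as far as I can tell the log-likelihood is $$ \ell(\theta, x) = \log f(x|\theta) = \begin{cases} -\log \theta & \theta \ge x\\ -\infty & \theta < x, \end{cases} $$ so $\nabla_{\theta}\ell(\theta, x) = -\frac{1}{\theta}$ whenever $\theta > x$, while at $\theta = x$ the derivative does not exist.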

I know this is a very basic example, but for some reason I couldn't find a full derivation of the Fisher information for this case; all I see is the stated result that it is $\frac{1}{\theta^2}$.

I would appreciate anyone pointing out what I am missing here. Thanks!

Welcome to CV. There is an extended discussion of Fisher Information in this thread http://stats.stackexchange.com/questions/196576/what-kind-of-information-is-fisher-information. – Mike Hunter Mar 05 '16 at 20:01

1 Answer


Answered in comments, copied below:

The derivative is undefined only at a single value of $x$ (namely $x = \theta$), hence it is defined almost everywhere, which is all you need for the variance computation. – Xi'an

kjetil b halvorsen
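
For completeness, a minimal sketch of the computation the comment points to, using the almost-everywhere derivative together with the second-moment form of the definition quoted in the question: for fixed $\theta$ and $x \sim U(0,\theta)$, $$ \nabla_{\theta}\ell(\theta, x) = -\frac{1}{\theta} \quad \text{for almost every } x \in (0, \theta), $$ so $$ \mathbb{E}\left[\left(\nabla_{\theta}\ell(\theta, x)\right)^2\right] = \int_0^{\theta} \frac{1}{\theta^2}\,\frac{1}{\theta}\,dx = \frac{1}{\theta^2}, $$ which matches the stated result.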
However, the information is not particularly meaningful in the uniform context: for instance, the information brought by an $n$-sample is $n^2$ times the information brought by a 1-sample, the information is not the variance of the score, and it does not appear in a second-order approximation to the log-likelihood since the MLE does not cancel the derivative, etc. – Xi'an Jan 25 '19 at 15:21
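
A short check of the claims in this comment, assuming the almost-everywhere score $\nabla_{\theta}\ell(\theta, x) = -\frac{1}{\theta}$ from above: $$ \mathbb{E}\left[\nabla_{\theta}\ell(\theta, x)\right] = -\frac{1}{\theta} \neq 0, \qquad \operatorname{Var}\left[\nabla_{\theta}\ell(\theta, x)\right] = 0 \neq \frac{1}{\theta^2} = \mathbb{E}\left[\left(\nabla_{\theta}\ell(\theta, x)\right)^2\right], $$ so the second moment of the score is not its variance here. Similarly, for an $n$-sample the log-likelihood is $\ell_n(\theta) = -n\log\theta$ on $\{\theta \ge \max_i x_i\}$, giving $\mathbb{E}\left[\left(\nabla_{\theta}\ell_n(\theta)\right)^2\right] = \frac{n^2}{\theta^2}$, i.e. $n^2$ times (not $n$ times) the one-sample value.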