
While studying asymptotics, I came across the concept of observed Fisher information as a way to estimate the Fisher information when the parameter $\theta$ is unknown. I am also aware that it is related in some way to maximum likelihood estimators, but I am a bit confused.

Thus my questions are: What is the difference between expected and observed Fisher information? How is the observed Fisher information computed? And how is it used?

EDIT: From the linked pages, I have understood how the observed Fisher information is computed and how it may be preferable to the expected information for finite samples. But I still have some doubts: is the only difference between observed and expected information the use of an estimate (typically the MLE of $\theta$) in place of a known $\theta$? And should the observed information therefore be used, for example, for asymptotic inference?

PhDing
  • Hi Alessandro. My answer [here](http://stats.stackexchange.com/a/68095/21054) contains the gist of it. Maybe this helps you to get an idea. – COOLSerdash Dec 27 '15 at 08:30
  • Thank you! The derivation as well as the computations are really well explained. Do you also have any hint about the use of the observed Fisher information? – PhDing Dec 27 '15 at 08:50
  • Whenever you calculate the Fisher information using an MLE *estimate* of the parameter, you are calculating the observed Fisher information. It's called observed because the MLE depends on the observed data. [Here](https://www.jstor.org/stable/2335893?seq=1#page_scan_tab_contents) is an article that explains the differences and [here](http://stats.stackexchange.com/a/155020/21054) is another, similar post on this site. – COOLSerdash Dec 27 '15 at 09:14
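As a small illustration of the point made in the comments, here is a hedged sketch (my own example, not from the linked answers) computing the observed information for an i.i.d. Bernoulli sample: the observed information is the negative second derivative of the log-likelihood evaluated at the MLE $\hat p$, and for this particular model it happens to coincide with the expected information $n/(p(1-p))$ evaluated at $\hat p$.

```python
import numpy as np

# Sketch: observed Fisher information for an i.i.d. Bernoulli(p) sample.
# Log-likelihood: l(p) = k*log(p) + (n - k)*log(1 - p), with k = number of successes.
rng = np.random.default_rng(0)
n = 1000
data = rng.binomial(1, 0.3, size=n)
k = data.sum()

p_hat = k / n  # MLE of p

# Observed information: J(p_hat) = -l''(p_hat)
#   l''(p) = -k/p^2 - (n - k)/(1 - p)^2
observed_info = k / p_hat**2 + (n - k) / (1 - p_hat)**2

# Expected information for Bernoulli: I(p) = n / (p*(1 - p)),
# evaluated at the MLE. For this model the two quantities agree exactly.
expected_info_at_mle = n / (p_hat * (1 - p_hat))

print(observed_info, expected_info_at_mle)
```

For models outside the exponential family the two quantities generally differ in finite samples, which is exactly the distinction the linked article discusses.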

0 Answers