I'm reading a paper in which they use the eigenvectors of the inverse Hessian of a continuous probability distribution to characterize dimensions along which the distribution is most and least constrained. I'm having some trouble with the intuition behind this. Where do the eigenvectors of the inverse Hessian point in terms of dimensions along which we have most and least variance? Are the eigenvectors/eigenvalues of the inverse Hessian related to those of the Hessian?
Viewed 776 times
"Are the eigenvectors/eigenvalues of the inverse Hessian related to those of the Hessian?"
Yes. The Hessian is a symmetric matrix, so it can be diagonalized as
$H=Q\Lambda Q^{T}$
where $Q$ is an orthogonal matrix whose columns are eigenvectors of $H$ and $\Lambda$ is a diagonal matrix with the eigenvalues of $H$ on the diagonal. The inverse is
$H^{-1}=Q \Lambda^{-1} Q^{T}$
where $\Lambda^{-1}$ is a diagonal matrix with the reciprocals of the original eigenvalues on its diagonal. This means that the eigenvectors of $H$ are also eigenvectors of $H^{-1}$, with eigenvalues that are the reciprocals of the eigenvalues of $H$.
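You can check this relationship numerically. Below is a minimal NumPy sketch using an arbitrary small symmetric positive-definite matrix as a stand-in Hessian (the specific matrix is just an illustrative choice, not from any particular distribution):

```python
import numpy as np

# A small symmetric positive-definite matrix standing in for a Hessian
# (hypothetical example values).
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])

# Eigendecomposition H = Q @ diag(lam) @ Q.T
# (np.linalg.eigh is for symmetric matrices; eigenvalues come back ascending).
lam, Q = np.linalg.eigh(H)

# Build the inverse from the decomposition: H^{-1} = Q @ diag(1/lam) @ Q.T
H_inv = Q @ np.diag(1.0 / lam) @ Q.T

# It matches the directly computed inverse.
assert np.allclose(H_inv, np.linalg.inv(H))

# The eigenvalues of H^{-1} are the reciprocals of those of H
# (same eigenvectors, up to sign and ordering).
lam_inv, _ = np.linalg.eigh(H_inv)
assert np.allclose(np.sort(lam_inv), np.sort(1.0 / lam))

print("eigenvalues of H:     ", lam)
print("eigenvalues of H^{-1}:", lam_inv)
```

So the direction with the largest eigenvalue of $H^{-1}$ is exactly the direction with the smallest eigenvalue of $H$, and vice versa.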

Brian Borchers