
I fitted a hyperbolic distribution to my data with the hyperbFit(mydata, hessian=TRUE) command (package HyperbolicDist). The Hessian looks like this:

> hyperbfitalv$hessian
            hyperbPi      lZeta     lDelta            mu
hyperbPi   536.61654  -23.82800   25.62345   26153.16561
lZeta      -23.82800  250.74196 -261.20570     -35.58481
lDelta      25.62345 -261.20570  272.77771     182.75927
mu       26153.16561  -35.58481  182.75927 2028904.75586

Now I want to calculate the variance-covariance matrix of the parameter estimates, according to this page:

The asymptotic covariance matrix of $\hat{\theta}$ is given by the inverse of the negative of the Hessian matrix evaluated at $\hat{\theta}$.

I therefore calculate:

solve(-hyperbfitalv$hessian)

which gives

         hyperbPi         lZeta        lDelta            mu
hyperbPi -5.113433e-03 -0.0091511819 -0.0083271877  6.650321e-05
lZeta    -9.151182e-03 -1.6617499980 -1.5905496996  2.320893e-04
lDelta   -8.327188e-03 -1.5905496996 -1.5261031428  2.169113e-04
mu        6.650321e-05  0.0002320893  0.0002169113 -1.365591e-06

This looks clearly wrong to me, because the diagonal contains negative values for the variances, and a variance cannot be negative. A covariance can be negative, but not a variance.

EDIT: The complete output of hyperbFit(mydata,hessian=TRUE):

Data:     mydata
Parameter estimates:
       pi       zeta      delta         mu  
 0.090747   0.204827   0.002035  -0.002494  
Likelihood:         756.911 
Method:             Nelder-Mead 
Convergence code:   0 
Iterations:         365 

2nd EDIT: If I use solve(hyperbfitalv$hessian) I get

             hyperbPi         lZeta        lDelta            mu
hyperbPi  5.113433e-03  0.0091511819  0.0083271877 -6.650321e-05
lZeta     9.151182e-03  1.6617499980  1.5905496996 -2.320893e-04
lDelta    8.327188e-03  1.5905496996  1.5261031428 -2.169113e-04
mu       -6.650321e-05 -0.0002320893 -0.0002169113  1.365591e-06
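As a cross-check, both inversions can be reproduced outside R. A minimal numpy sketch (my own illustration, not part of the original thread, using only the rounded Hessian values printed above):

```python
import numpy as np

# Hessian returned by hyperbFit(mydata, hessian=TRUE), copied from above;
# rows/columns are (hyperbPi, lZeta, lDelta, mu)
H = np.array([
    [  536.61654,   -23.82800,    25.62345,   26153.16561],
    [  -23.82800,   250.74196,  -261.20570,     -35.58481],
    [   25.62345,  -261.20570,   272.77771,     182.75927],
    [26153.16561,   -35.58481,   182.75927, 2028904.75586],
])

# solve(-hessian): every diagonal entry is negative -- not a valid covariance
print(np.diag(np.linalg.inv(-H)))

# solve(hessian): every diagonal entry is positive
print(np.diag(np.linalg.inv(H)))
```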

3rd EDIT: The output of summary(hyperbfitalv):

Data:      mydata
Parameter estimates:
       pi          zeta         delta         mu    
    0.090747     0.204827     0.002035    -0.002494 
  ( 0.071508)  ( 0.264040)  ( 0.002514)  ( 0.001169)
Likelihood:         756.911 
Method:             Nelder-Mead 
Convergence code:   0 
Iterations:         365 

4th EDIT: Ok, this is the Hessian of pi, log(zeta), log(delta), and mu, but how can I get the Hessian of pi, zeta, delta, and mu?

Jen Bohold
  • This hessian matrix you've shown is positive definite. Therefore, either the optimization algorithm did not converge or the converged point is a local _minimum_, not a maximum. I suspect this is a computing error. I suggest supplying the rest of the code you've entered off screen and posting this on stack overflow. – Macro Aug 16 '13 at 17:27
  • @Macro Thanks for the answer, but there is no other code? I just used hyperbFit(mydata,hessian=TRUE) and that's it? Before I load the package HyperbolicDist. But nothing more? I added the complete output, not only the hessian. As you can see the convergence code is 0. In the manual of the command "optim" it says: "0 indicates successful completion". (I used default values so in default it is Nelder-Mead of the optim command.) – Jen Bohold Aug 16 '13 at 17:33
  • Hi @Jen - there must be some information missing because, according to the [help file](http://127.0.0.1:10252/library/GeneralizedHyperbolic/html/hyperbFit.html), `hyperbFit` doesn't even take an argument called `hessian`. I get a warning when I try to pass that argument. You calculate the hessian by an appropriate call to the `summary` function, e.g. `summary(fittedmodel, hessian=TRUE)`. You may try refitting the model with different starting values because I don't think the model converged. Again, I think this is a computing issue so I suggest deleting this and re-posting on stackoverflow. – Macro Aug 16 '13 at 17:37
  • @Macro http://help.rmetrics.org/HyperbolicDist/hyperbFit.html of course there is an hessian argument? Your link does not work for me. I did not use a summary function up to now? I just used the code as it is and it works with the commands and especially with the hessian? – Jen Bohold Aug 16 '13 at 17:39
  • @Macro see Arguments "hessian Logical. If TRUE the value of the hessian is returned." As I wrote I use the HyperbolicDist package. – Jen Bohold Aug 16 '13 at 17:40
  • @Macro Ah, but your other hint was useful, in the optim manual it says: "By default optim performs minimization". So should I just use solve(hyperbfitalv$hessian)? Or is it minimizing somehow (-2)*log(likelihood) and then the HALF of the hessian is to be used? – Jen Bohold Aug 16 '13 at 17:43
  • Oh, I was using the wrong package (GeneralizedHyperbolic). Oops. When you invert the hessian it returns and take the square root of the diagonal, some of the values match the standard errors exactly (not all of them, though, which is odd), suggesting it may be returning the negative hessian. As I said, the matrix you gave can't be the hessian of a maximized log-likelihood. See [the second derivative test](http://en.wikipedia.org/wiki/Second_partial_derivative_test#Functions_of_many_variables) for more information. I still contend this is a computing issue, not a statistical one. Best of luck. – Macro Aug 16 '13 at 17:54
  • @Macro ok, assume it is the minimized log-likelihood hessian matrix, does it then pass the second derivative test? – Jen Bohold Aug 16 '13 at 17:55
  • Yes, this being the negative hessian would make everything make sense but it's not clear why the standard errors in the output (when you type `summary(fittedmodel)`) don't match the diagonal entries of the square root of the hessian. Perhaps it's using some other method for getting its standard errors. Or, it could be a bug. Who knows. Take it over to stack overflow. Best – Macro Aug 16 '13 at 17:59
  • How do you know the values of summary(hyperbfitalv)? I added it. But it's true what you said, the square roots of the Hessian's diagonal are not equal to the standard errors? – Jen Bohold Aug 16 '13 at 17:59
  • I was using the example in the help file. I was finding that some of the entries of sqrt(H^-1) matched the standard errors but others didn't. I suspect this is a computing issue/bug with the package but I guess I could be wrong. You can either move it yourself (e.g. by deleting it here and re-posting over there) or have a mod do it, which could take a while. Or, you could leave it here. It's up to you. Either way, we need to stop this comment discussion. If you move it to chat I may join you later but I can't promise. Cheers. – Macro Aug 16 '13 at 18:08
  • @Macro `summary.hyperbFit` uses the delta method to obtain the standard errors because the Hessian is not parameterized in terms of the actual parameters: see http://help.rmetrics.org/HyperbolicDist/summary.hyperbFit.html. – whuber Aug 16 '13 at 18:08
  • Perfect. There's your answer. The hessian it returns is actually the negative hessian and whuber has explained the discrepancy with the standard errors (thanks). Jen, if you're satisfied with this answer perhaps you can post it as an answer below (or possibly delete the question since its on-topicness is debatable). – Macro Aug 16 '13 at 18:11
  • I am not satisfied, because I need one more answer: For what parameterization is this hessian now? For the zeta,pi,delta, mu? No, because then the standard errors would have to be the same (to the square roots of the hessian matrix)? – Jen Bohold Aug 16 '13 at 18:16
  • 4
    OK, the manual says: It is the hessian of " pi, log(zeta), log(delta), and mu", but how can I get the hessian of pi,zeta,delta and mu? So without the logs? – Jen Bohold Aug 16 '13 at 18:18
  • @Macro Thanks for your help, ok. I will do a follow-up post and answer this question by myself. – Jen Bohold Aug 16 '13 at 18:24

2 Answers


`optim` in its default setting performs minimization; see the manual:

By default optim performs minimization

Since optim has minimized the negative log-likelihood, the Hessian it returns is already the negative Hessian. The covariance matrix is therefore obtained with solve(hyperbfitalv$hessian), with no extra minus sign.
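A quick numeric check (a numpy sketch of my own, using the Hessian values from the question): inverting the returned Hessian directly gives positive variances, and for the untransformed parameters pi and mu the square roots already match the standard errors in the summary output.

```python
import numpy as np

# Hessian from the question, parameterized as (pi, log(zeta), log(delta), mu)
H = np.array([
    [  536.61654,   -23.82800,    25.62345,   26153.16561],
    [  -23.82800,   250.74196,  -261.20570,     -35.58481],
    [   25.62345,  -261.20570,   272.77771,     182.75927],
    [26153.16561,   -35.58481,   182.75927, 2028904.75586],
])

cov = np.linalg.inv(H)        # equivalent to solve(hyperbfitalv$hessian) in R
se = np.sqrt(np.diag(cov))

print(se[0])                  # ~0.071508, the SE of pi in summary()
print(se[3])                  # ~0.001169, the SE of mu in summary()
```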

It should be further noted that:

Because the parameters in the call to the optimiser are pi, log(zeta), log(delta), and mu, the delta method is used to obtain the standard errors for zeta and delta.

Source: http://help.rmetrics.org/HyperbolicDist/summary.hyperbFit.html
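Concretely, because zeta = exp(lZeta) and delta = exp(lDelta), the delta method here reduces to SE(zeta) ≈ zeta · SE(log zeta) and SE(delta) ≈ delta · SE(log delta). A numpy sketch (my own check, using the estimates and Hessian from the question) reproduces all four standard errors of the summary output:

```python
import numpy as np

# Hessian w.r.t. (pi, log(zeta), log(delta), mu), from the question
H = np.array([
    [  536.61654,   -23.82800,    25.62345,   26153.16561],
    [  -23.82800,   250.74196,  -261.20570,     -35.58481],
    [   25.62345,  -261.20570,   272.77771,     182.75927],
    [26153.16561,   -35.58481,   182.75927, 2028904.75586],
])

cov = np.linalg.inv(H)                  # covariance of (pi, lZeta, lDelta, mu)
zeta, delta = 0.204827, 0.002035        # estimates from the fit

# delta method: d(exp(x))/dx = exp(x), so scale the log-scale SEs by the estimates
scale = np.array([1.0, zeta, delta, 1.0])
se = np.sqrt(np.diag(cov)) * scale
print(se)    # ~[0.071508, 0.264040, 0.002514, 0.001169], as in summary()
```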

Jen Bohold

To second @Jen's answer: the fifth line in the output of summary(hyperbfitalv) shows the standard errors. They are indeed obtained from the square roots of the diagonal elements of the inverse Hessian solve(hyperbfitalv$hessian), with the delta method applied for zeta and delta.

>>> sqrt(5.113433e-3)  # for pi
0.071508272248740568
>>> sqrt(1.365591e-6)  # for mu
0.0011685850418347824
>>> sqrt(1.5261031428)*0.002035  # for delta
0.0025139483860139073
>>> sqrt(1.6617499980)*0.204827  # for zeta
0.26404019669949413

Note that lZeta and lDelta are in fact log(zeta) and log(delta); that is why their square roots are multiplied by the estimates of zeta and delta above (the delta method). Cheers!
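To also address the question's 4th edit: the whole covariance matrix can be transformed to the (pi, zeta, delta, mu) scale at once, and the original-scale Hessian recovered by the chain rule (valid at the optimum, where the gradient term vanishes). A numpy sketch of my own, using only the values from the question:

```python
import numpy as np

# Hessian w.r.t. (pi, log(zeta), log(delta), mu), from the question
H = np.array([
    [  536.61654,   -23.82800,    25.62345,   26153.16561],
    [  -23.82800,   250.74196,  -261.20570,     -35.58481],
    [   25.62345,  -261.20570,   272.77771,     182.75927],
    [26153.16561,   -35.58481,   182.75927, 2028904.75586],
])

zeta, delta = 0.204827, 0.002035         # parameter estimates
# Jacobian d(pi, zeta, delta, mu) / d(pi, lZeta, lDelta, mu) at the estimates
D = np.diag([1.0, zeta, delta, 1.0])

# Covariance on the original parameter scale (multivariate delta method)
cov_theta = D @ np.linalg.inv(H) @ D
print(np.sqrt(np.diag(cov_theta)))       # ~[0.071508, 0.264040, 0.002514, 0.001169]

# Hessian w.r.t. (pi, zeta, delta, mu): J^T H J with J the inverse Jacobian
J = np.linalg.inv(D)
H_theta = J.T @ H @ J                    # inverting this recovers cov_theta
```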

CT Zhu