Following this post, I think I first need to make sure I am theoretically sound.
In my theory class, I learnt that the inverse of the information matrix is the variance-covariance matrix of the estimates. To find the variance-covariance matrix, we follow these steps:
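To state the result I am relying on explicitly: the asymptotic variance-covariance matrix of the maximum-likelihood estimates is the inverse of the (observed) information matrix, where the information is the negative Hessian of the log-likelihood,

$$\operatorname{Var}(\hat\theta) \approx I(\hat\theta)^{-1}, \qquad I(\theta) = -\frac{\partial^2 \ell(\theta)}{\partial\theta\,\partial\theta^\top},$$

with $\ell(\theta)$ the log-likelihood.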
(1) First, we need to write down the log-likelihood, which is done by `dd.ML <- lme4:::devfun2(model, useSc=TRUE, signames=FALSE)` in the post. Isn't it?
(2) Then we need to compute the information matrix, which is done by `hh1 <- hessian(dd.ML, pars)` in the post.
(3) The last step is to take the inverse of the information matrix, which can be done by `solve(hh1)`. But I don't understand why `solve(hh1)` is multiplied by 2 in the post: `vv2 <- 2*solve(hh1)`.
Also, why do I need to double `sqrt(diag(vv2))` to get the standard errors of the variances, i.e., `2*sqrt(diag(vv2))`? Why isn't `sqrt(diag(solve(hh1)))` all that is needed to calculate the standard errors of the variances?
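To make the question concrete, here is a minimal reproducible sketch of the recipe as I understand it from the linked post. The model and data are only an example (the built-in `sleepstudy` data with a random intercept), and I am assuming the `lme4` and `numDeriv` packages are installed:

```r
library(lme4)      # for lmer() and the internal devfun2()
library(numDeriv)  # for hessian()

## example model: random-intercept fit, ML rather than REML
model <- lmer(Reaction ~ Days + (1 | Subject),
              data = sleepstudy, REML = FALSE)

## (1) deviance function of the variance parameters
dd.ML <- lme4:::devfun2(model, useSc = TRUE, signames = FALSE)

## parameter values at the fit: random-effect sd's and residual sd
pars <- as.data.frame(VarCorr(model))[, "sdcor"]

## (2) numerical Hessian of dd.ML at the estimates
hh1 <- hessian(dd.ML, pars)

## (3) the steps I am asking about:
vv2 <- 2 * solve(hh1)   # why 2*solve(hh1) rather than solve(hh1)?
sqrt(diag(vv2))         # standard errors
2 * sqrt(diag(vv2))     # and why doubled for the variances?
```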