
Following this post, I think I first need to be theoretically sound.

In my theory class, I learnt that the inverse of the information matrix is the variance-covariance matrix of the estimates. To find the variance-covariance matrix, we follow these steps:
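In symbols, as I understand it, for a parameter vector $\theta$ with log-likelihood $\ell(\theta)$ and maximum-likelihood estimate $\hat\theta$:

$$I(\hat\theta) = -\left.\frac{\partial^2 \ell(\theta)}{\partial \theta\, \partial \theta^\top}\right|_{\theta = \hat\theta}, \qquad \widehat{\operatorname{Var}}(\hat\theta) \approx I(\hat\theta)^{-1}.$$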

(1) First we need to write down the log-likelihood expression, which is done by `dd.ML <- lme4:::devfun2(model, useSc=TRUE, signames=FALSE)` in the post (all three steps are put together in the sketch below). Isn't it?

(2) Then we need to construct the information matrix, which is done by `hh1 <- hessian(dd.ML, pars)` in the post.

(3) And the last step is to take the inverse of the information matrix, which can be done by `solve(hh1)`. But I don't understand why `solve(hh1)` is multiplied by 2 in the post: `vv2 <- 2*solve(hh1)`.
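My only guess so far: if `dd.ML` actually returns the deviance $D(\theta) = -2\,\ell(\theta)$ rather than the log-likelihood, then its Hessian $H$ is twice the observed information,

$$H = \frac{\partial^2 D(\theta)}{\partial \theta\, \partial \theta^\top} = 2\,I(\theta), \qquad \widehat{\operatorname{Var}}(\hat\theta) \approx I(\hat\theta)^{-1} = 2\,H^{-1},$$

which would explain `vv2 <- 2*solve(hh1)`. Is that right?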

Also, why do I need to double `sqrt(diag(vv2))` to get the standard errors of the variances, i.e. `2*sqrt(diag(vv2))`? Why isn't `diag(solve(hh1))` alone enough to calculate the standard errors of the variances?
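My guess for the doubling is the delta method, assuming the parameters in `pars` are standard deviations $\sigma$ rather than variances:

$$\operatorname{Var}(\hat\sigma^2) \approx \left(\frac{\mathrm{d}\,\sigma^2}{\mathrm{d}\sigma}\right)^{2} \operatorname{Var}(\hat\sigma) = (2\hat\sigma)^2\, \operatorname{Var}(\hat\sigma), \qquad \operatorname{SE}(\hat\sigma^2) \approx 2\,\hat\sigma\, \operatorname{SE}(\hat\sigma),$$

but I am not sure this is what the post's `2*sqrt(diag(vv2))` is doing.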
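For concreteness, here is my attempt at a minimal, self-contained version of the three steps, following the code quoted from the post. The `sleepstudy` model and the construction of `pars` are my own guesses at what the post intends, and I am assuming `hessian()` comes from the numDeriv package:

```r
library(lme4)
library(numDeriv)

## Example model (my own choice, not from the post); fit with ML
## (REML = FALSE), since the deviance-based calculation needs the ML fit
model <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)

## Step (1): deviance function of the variance-covariance parameters
## (on the standard-deviation/correlation scale when signames = FALSE)
dd.ML <- lme4:::devfun2(model, useSc = TRUE, signames = FALSE)

## Parameter estimates at the optimum, on the same sd/cor scale
pars <- as.data.frame(VarCorr(model))[, "sdcor"]

## Step (2): numerical Hessian of the deviance at the optimum
hh1 <- hessian(dd.ML, pars)

## Step (3): invert; the 2 would undo the deviance's -2 * log-likelihood
## scaling, if my guess above is right
vv2 <- 2 * solve(hh1)

## Approximate standard errors of the sd parameters
sqrt(diag(vv2))
```

If I understand correctly, `sqrt(diag(vv2))` here gives standard errors on the standard-deviation scale, not the variance scale, which is partly why the extra factor of 2 confuses me.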

user81411
Note the function in step 1 is for deviance rather than log-likelihood (see e.g. [What is Deviance? (specifically in CART/rpart)](http://stats.stackexchange.com/q/6581/17230)). So there's a factor of two that's to come out at some point. – Scortchi - Reinstate Monica Aug 04 '15 at 16:22

0 Answers