13

In the context of best linear unbiased predictors (BLUP), Henderson specified the mixed-model equations (see Henderson (1950): Estimation of Genetic Parameters. Annals of Mathematical Statistics, 21, 309-310). Let us assume the following mixed effects model:

$y = X\beta+Zu+e$

where $y$ is a vector of $n$ observable random variables, $\beta$ is a vector of $p$ fixed effects, $X$ and $Z$ are known matrices, and $u$ and $e$ are vectors of $q$ and $n$ random effects such that $E(u) = 0$, $E(e) = 0$, and

$ Var \begin{bmatrix} u \\ e \\ \end{bmatrix} = \begin{bmatrix} G & 0 \\ 0 & R \\ \end{bmatrix}\sigma^2$

where $G$ and $R$ are known positive definite matrices and $\sigma^2$ is a positive constant.

According to Henderson (1950), the BLUP estimates $\hat {\beta}$ of $\beta$ and $\hat {u}$ of $u$ are defined as the solutions to the following system of equations:

$X'R^{-1}X\hat {\beta}+X'R^{-1}Z\hat {u} = X'R^{-1}y$

$Z'R^{-1}X\hat {\beta}+(Z'R^{-1}Z + G^{-1})\hat {u} = Z'R^{-1}y$

(Also see: Robinson (1991): That BLUP is a good thing: the estimation of random effects (with discussion). Statistical Science, 6:15–51).
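For concreteness, here is a small numerical check of this system (a minimal sketch with arbitrary made-up matrices, using numpy): solving Henderson's equations should reproduce the GLS estimate $\hat{\beta} = (X'V^{-1}X)^{-1}X'V^{-1}y$ and the BLUP $\hat{u} = GZ'V^{-1}(y - X\hat{\beta})$, where $V = R + ZGZ'$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative dimensions and matrices (not from any real data set).
n, p, q = 8, 2, 3
X = rng.normal(size=(n, p))          # fixed-effects design matrix
Z = rng.normal(size=(n, q))          # random-effects design matrix
G = 2.0 * np.eye(q)                  # covariance of the random effects u
R = 0.5 * np.eye(n)                  # residual covariance
y = rng.normal(size=n)

# Henderson's mixed-model equations as a single block system.
Ri = np.linalg.inv(R)
C = np.block([[X.T @ Ri @ X,            X.T @ Ri @ Z],
              [Z.T @ Ri @ X, Z.T @ Ri @ Z + np.linalg.inv(G)]])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
beta_hat, u_hat = np.split(np.linalg.solve(C, rhs), [p])

# BLUE/BLUP via the marginal covariance V = R + Z G Z'.
V = R + Z @ G @ Z.T
Vi = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
u_blup = G @ Z.T @ Vi @ (y - X @ beta_gls)

print(np.allclose(beta_hat, beta_gls))  # True
print(np.allclose(u_hat, u_blup))       # True
```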

I have not found any derivation of this solution, but I assume that he approached it by minimizing

$(y - X\beta - Zu)'V^{-1}(y - X\beta - Zu)$

where $V = R + ZGZ'$. The solutions should therefore be

$X'V^{-1}X\hat {\beta} + X'V^{-1}Z\hat {u} = X'V^{-1}y$

$Z'V^{-1}X\hat {\beta} + Z'V^{-1}Z\hat {u} = Z'V^{-1}y$.

We also know that $V^{-1} = R^{-1} - R^{-1}Z(G^{-1}+Z'R^{-1}Z)^{-1}Z'R^{-1}$.
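This is the Woodbury identity; for completeness, it can be verified by direct multiplication, using $I + GZ'R^{-1}Z = G(G^{-1}+Z'R^{-1}Z)$:

$(R + ZGZ')\left(R^{-1} - R^{-1}Z(G^{-1}+Z'R^{-1}Z)^{-1}Z'R^{-1}\right) = I + ZGZ'R^{-1} - Z(I + GZ'R^{-1}Z)(G^{-1}+Z'R^{-1}Z)^{-1}Z'R^{-1} = I + ZGZ'R^{-1} - ZGZ'R^{-1} = I$.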

However, how does one proceed from here to arrive at the mixed-model equations?

Robert Long
DomB

2 Answers

10

One approach is to form the log-likelihood, differentiate it with respect to the random effects $\mathbf{u}$ and set the result equal to zero, and then do the same with respect to the fixed effects $\boldsymbol{\beta}$.

With the usual normality assumptions we have:

$$ \begin{align*} \mathbf{y|u} &\sim \mathcal{N}\mathbf{(X\beta + Zu, R)} \\ \mathbf{u} &\sim \mathcal{N}(\mathbf{0, G}) \end{align*} $$ where $\mathbf{y}$ is the response vector, $\mathbf{u}$ and $\boldsymbol{\beta}$ are the random-effects and fixed-effects coefficient vectors, and $\mathbf{X}$ and $\mathbf{Z}$ are the model matrices for the fixed effects and random effects, respectively. The log-likelihood (up to an additive constant) is then:

$$ -2\log L(\boldsymbol{\beta,}\mathbf{u}) = \log|\mathbf{R}|+(\mathbf{y - X\boldsymbol{\beta} - Zu})'\mathbf{R}^{-1}(\mathbf{y - X\boldsymbol{\beta} - Zu}) +\log|\mathbf{G}|+\mathbf{u'G^{-1}u} $$ Differentiating with respect to the random and fixed effects: $$ \begin{align*} \frac{\partial \log L}{\partial \mathbf{u}} &= \mathbf{Z'R^{-1}}(\mathbf{y - X\boldsymbol{\beta} - Zu}) - \mathbf{G^{-1}u} \\ \frac{\partial \log L}{\partial \boldsymbol{\beta}} &= \mathbf{X'R^{-1}}(\mathbf{y - X\boldsymbol{\beta} - Zu}) \end{align*} $$ After setting these both equal to zero, with some minor re-arranging, we obtain Henderson's mixed model equations:

$$ \begin{align*} \mathbf{Z'R^{-1}}\mathbf{y} &= \mathbf{Z'R^{-1}X\boldsymbol{\beta}} + \mathbf{(Z'R^{-1}Z+G^{-1})u} \\ \mathbf{X'R^{-1}}\mathbf{y} &= \mathbf{X'R^{-1}X\boldsymbol{\beta}} + \mathbf{X'R^{-1}Zu} \end{align*} $$
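Stacking these two equations gives the familiar block form of the mixed model equations:

$$ \begin{bmatrix} \mathbf{X'R^{-1}X} & \mathbf{X'R^{-1}Z} \\ \mathbf{Z'R^{-1}X} & \mathbf{Z'R^{-1}Z+G^{-1}} \end{bmatrix} \begin{bmatrix} \boldsymbol{\beta} \\ \mathbf{u} \end{bmatrix} = \begin{bmatrix} \mathbf{X'R^{-1}y} \\ \mathbf{Z'R^{-1}y} \end{bmatrix} $$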

Ufos
Robert Long
  • Thanks! Makes perfect sense. Just a follow-up question. Since we assume normality of both _u_ and _e_, we can form the log likelihood by using the joint density of _u_ and _e_. Within this joint density, we have a variance-covariance matrix with _G_ and _R_ on the diagonal. If we now assume a three-level model with _H_ being the variance-covariance matrix of the third-level random effect, then we would have _G_, _R_, and _H_ on the diagonal? Cheers – DomB Jan 09 '19 at 19:08
  • You are very welcome. Please post this follow-up as a new question, perhaps with a link to this question. – Robert Long Jan 09 '19 at 22:08
  • Done! https://stats.stackexchange.com/questions/386474/what-is-the-joint-density-function-of-3level-mixed-effects-model – DomB Jan 10 '19 at 05:55
  • Should it not be $X'R^{-1}Zu$ on the last line? Similarly for the above? – JDoe2 Jul 23 '20 at 13:29
  • @JDoe2 I don't think so, but I'm a bit rusty on this now. It seems to match [this](https://www.rdocumentation.org/packages/synbreed/versions/0.12-9/topics/MME) doesn't it ? – Robert Long Jul 23 '20 at 13:45
  • @JDoe2 The final equations look right to me but there could be an error in the differentiation. – Robert Long Jul 23 '20 at 14:04
  • You have $uX'R^{-1}Z$. If $X$ has dimensions $(n \times p)$ and $Z$ has dimensions $(n \times q)$ then $u$ must have dimension $(q \times 1)$. But looking at $uX'R^{-1}Z$ the dimensions of the first matrix multiplication don't make sense then. – JDoe2 Jul 23 '20 at 14:07
  • Right. $X'R^{-1}Zu$ actually does make sense ! – Robert Long Jul 23 '20 at 14:55
  • I believe the same should be true for the line above as well... i.e. $(Z'R^{-1}Z+G^{-1})u$ rather than $u(Z'R^{-1}Z+G^{-1})$... I think – JDoe2 Jul 26 '20 at 18:47
  • Looks like you are right again. Maybe you should just edit it :) – Robert Long Jul 26 '20 at 18:50
  • 1
    I'll chime in: I can verify the result. It's also the same as in McCulloch and Searle (eq 9.35). They leave the derivation as an exercise. – Ufos May 23 '21 at 18:10
3

For a very simple derivation that does not require any normality assumption, see my paper:

A. Neumaier and E. Groeneveld, Restricted maximum likelihood estimation of covariances in sparse linear models, Genet. Sel. Evol. 30 (1998), 3-26.

Essentially, the mixed model $$y=X\beta+Zu+\epsilon,~~ Cov(u)=\sigma^2 G,~~ Cov(\epsilon)=\sigma^2 D,$$ where $u$ and $\epsilon $ have zero mean and wlog $G=LL^T$ and $D=MM^T$, is equivalent to the assertion that with $x=\pmatrix{\beta \cr u}$ and $P=\pmatrix{M & 0\cr 0 &L}$, $E=\pmatrix{I \cr 0}$, $A=\pmatrix{X & Z \cr 0 & I}$, the random vector $P^{-1}(Ey-Ax)$ has zero mean and covariance matrix $\sigma^2 I$. Thus the best linear unbiased predictor is given by the solution of the normal equations for the overdetermined linear system $P^{-1}Ax=P^{-1}Ey$. This gives Henderson's mixed model equations.
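Spelling out the last step: since $PP^T=\pmatrix{D & 0\cr 0 & G}$ and $Ey=\pmatrix{y\cr 0}$, the normal equations $A^T(PP^T)^{-1}Ax=A^T(PP^T)^{-1}Ey$ read $$\pmatrix{X^TD^{-1}X & X^TD^{-1}Z\cr Z^TD^{-1}X & Z^TD^{-1}Z+G^{-1}}\pmatrix{\beta\cr u}=\pmatrix{X^TD^{-1}y\cr Z^TD^{-1}y},$$ which are Henderson's mixed model equations (with $D$ in the role of $R$).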

Arnold Neumaier