
I have a positive definite symmetric matrix that looks like

$$\pmatrix{A & 0 & 0 & E \\ 0 & B & 0 & F \\ 0 & 0 & C & G \\ E^\prime & F^\prime & G^\prime & D}$$

where $A,B,C,D$ are positive definite symmetric matrices. Is there a nice way to calculate the determinant? For example, the upper-left block is block diagonal and its determinant is just $\det(A)\,\det(B)$. I want the determinant of the whole matrix, though.
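
For instance, the block-diagonal fact is easy to check numerically. Here is a throwaway sketch with made-up $2\times 2$ blocks (the names A0 and B0 are purely illustrative), using Matrix::bdiag:

require(Matrix)
A0 <- matrix(c(2, 1, 1, 2), 2)       # made-up symmetric positive definite blocks
B0 <- matrix(c(3, 0.5, 0.5, 3), 2)
det(as.matrix(bdiag(A0, B0))) - det(A0) * det(B0)   # essentially zero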

I have provided an example of such a matrix using dput in R (assigned to X below); just copy and paste it into R for a reproducible example:

require(Matrix)
new("dgCMatrix"
    , i = c(0L, 1L, 6L, 7L, 0L, 1L, 6L, 7L, 2L, 3L, 6L, 7L, 2L, 3L, 6L, 
7L, 4L, 5L, 6L, 7L, 4L, 5L, 6L, 7L, 0L, 1L, 2L, 3L, 4L, 5L, 6L, 
7L, 0L, 1L, 2L, 3L, 4L, 5L, 6L, 7L)
    , p = c(0L, 4L, 8L, 12L, 16L, 20L, 24L, 32L, 40L)
    , Dim = c(8L, 8L)
    , Dimnames = list(NULL, NULL)
    , x = c(1.0025, 0, 0.10565, 0, 0, 1.0025, 0.00907, 0, 1.0025, 0.92905, 
0.01591, 0, 0.92905, 1.0025, 0.04458, 0, 1.0025, 0, 0.69604, 
0, 0, 1.0025, 0.00011, 0, 0.10565, 0.00907, 0.01591, 0.04458, 
0.69604, 0.00011, 1.0025, 0, 0, 0, 0, 0, 0, 0, 0, 1.0025)
    , factors = list()
)
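
As a quick sanity check (just a sketch, assuming the dput output above has been run so that X exists), the matrix is symmetric and its eigenvalues should all be positive:

isSymmetric(X)                                     # TRUE
min(eigen(as.matrix(X), symmetric = TRUE)$values)  # positive, so X is positive definite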
whuber
Wis
  • This is a use-case for cofactor expansion. Any introductory linear algebra textbook will cover it. – Sycorax Mar 10 '16 at 21:52
  • Please can you give me a more specific reference? – Wis Mar 10 '16 at 22:10
  • http://www.purplemath.com/modules/minors.htm – Sycorax Mar 10 '16 at 22:11
  • Your matrix obviously is not in the form you claim it is in, because each of $A$ through $E$ must be square, and consequently of identical sizes, implying the overall matrix is square with dimensions that are a multiple of 3--but this one is an $8\times 8$ matrix. – whuber Mar 10 '16 at 22:12
  • @whuber Each matrix is a $2\times 2$ matrix; the matrix I provided in the code is bigger than the one in the picture, so I have adjusted the picture to fit the code. – Wis Mar 10 '16 at 22:22
  • That still cannot be correct, for then many of your matrices are neither symmetric nor positive-definite. Clearly none of $E,F,G$ are. – whuber Mar 10 '16 at 22:23
  • @whuber I adjusted the question; I had not mentioned that $E,F,G$ are not symmetric positive definite. I am sorry for that. – Wis Mar 10 '16 at 22:26
  • When you fix up the question, then, please indicate that the matrices in the bottom row are the *transposes* of those in the last column. BTW, it's irrelevant that $E,F,G$ have non-negative entries. – whuber Mar 10 '16 at 22:27
  • @user777 It's difficult to see how your reference applies. Remember, this is a *block* matrix whose entries are themselves matrices. But a similar approach will be successful: simply row-reduce the matrix (which will succeed because $A,B,C$ are all *definite*, and therefore invertible, matrices). The formula for the determinant is then both immediate and obvious. – whuber Mar 10 '16 at 22:35
  • @whuber Please could you give more details on the solution you proposed? – Wis Mar 10 '16 at 23:39

1 Answer


Let's talk a little about a useful way to manipulate block matrices: row reduction or Gaussian elimination. (A simple example of its utility in statistics appears at Diagonal elements of the inverted correlation matrix.)

Although this operation is well-known and described elsewhere on the Web, it is not easy to find descriptions that are sufficiently clear and general that they indicate how they might apply to block matrices. Let's fill that gap and then apply the result to get some insight into the present problem.

Let, then, $m = m_1 + m_2 + m_3 + m_4$ be the number of rows in any matrix $X$ and $n = n_1 + n_2 + n_3 + n_4$ be the number of its columns. We may write

$$X = \pmatrix{X_{11} & X_{12} & X_{13} & X_{14} \\ X_{21} & X_{22} & X_{23} & X_{24} \\ X_{31} & X_{32} & X_{33} & X_{34} \\ X_{41} & X_{42} & X_{43} & X_{44}}$$

where the $X_{ij}$ are $m_i\times n_j$ matrices (and any of the $m_i$ or $n_j$ may be zero).

Suppose you can find a matrix $U$ for which $U X_{11} = X_{31}$ (so that $U$ must be an $m_3 \times m_1$ matrix). You may use it to "eliminate" $X_{31}$ via a "block row operation":

$$\pmatrix{1_{m_1} & 0 & 0 & 0 \\ 0 & 1_{m_2} & 0 & 0 \\ -U & 0 & 1_{m_3} & 0 \\ 0 & 0 & 0 & 1_{m_4}} X = \pmatrix{X_{11} & X_{12} & X_{13} & X_{14} \\ X_{21} & X_{22} & X_{23} & X_{24} \\ 0 & X_{32} - U X_{12} & X_{33} - U X_{13} & X_{34} - U X_{14} \\ X_{41} & X_{42} & X_{43} & X_{44}}.$$

The notation "$1_k$" refers to the $k\times k$ identity matrix.

It is obvious that the determinant of the "elimination matrix" on the left-hand side is $1$. Therefore, whenever $X$ is a square matrix, a block row operation does not change its determinant.
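
To make this concrete, here is a minimal sketch in base R (made-up random $2\times 2$ blocks, unrelated to the matrix in the question) that builds such an elimination matrix and confirms the determinant is unchanged:

set.seed(1)
W <- crossprod(matrix(rnorm(64), 8, 8))  # a random symmetric positive-definite 8 x 8 matrix
W11 <- W[1:2, 1:2]                       # block sizes m_1 = m_2 = m_3 = m_4 = 2
W31 <- W[5:6, 1:2]
U <- W31 %*% solve(W11)                  # solves U %*% W11 = W31
L <- diag(8)
L[5:6, 1:2] <- -U                        # the block elimination matrix
round((L %*% W)[5:6, 1:2], 12)           # the (3,1) block has been zeroed out
det(L)                                   # 1
det(L %*% W) - det(W)                    # zero up to floating-point rounding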

The idea is to apply block row operations until the determinant of the matrix can easily be calculated. Looking at the matrix $X$ in the question, we can immediately see how to eliminate the blocks along the bottom row by subtracting suitable multiples of the first, second, and third block rows (in any order). This changes the matrix to

$$\pmatrix{A & 0 & 0 & E \\ 0 & B & 0 & F \\ 0 & 0 & C & G \\ 0 & 0 & 0 & D - E^\prime A^{-1} E - F^\prime B^{-1} F - G^\prime C^{-1} G}.$$

These operations are feasible because

  • The positive-definiteness of $A$, $B$, and $C$ implies all three are invertible.

  • The unique solution to $E^\prime = UA$ is $U = E^\prime A^{-1}$, the unique solution to $F^\prime = UB$ is $U = F^\prime B^{-1}$, and the unique solution to $G^\prime = UC$ is $U = G^\prime C^{-1}$.

The determinant of the resulting block triangular matrix is the product of the determinants of the blocks along the diagonal:

$$|X| = |A|\,|B|\,|C|\,|D - E^\prime A^{-1} E - F^\prime B^{-1} F - G^\prime C^{-1} G|.$$

As a check, let X be the matrix defined in the R code in the question. Let's compare the result of this formula to the direct calculation of the determinant:

A <- X[1:2, 1:2]
B <- X[3:4, 3:4]
C <- X[5:6, 5:6]
D <- X[7:8, 7:8]
E <- X[1:2, 7:8]
F <- X[3:4, 7:8]
G <- X[5:6, 7:8]
det.X <- det(X)
det.X0 <- det(A) * det(B) * det(C) * det(D - crossprod(E, solve(A, E))
                                           - crossprod(F, solve(B, F))
                                           - crossprod(G, solve(C, G)))

(det.X - det.X0) / det.X

[1] -1.926411e-16

The relative error is on the order of floating-point roundoff error: the two values must be considered equal.

For small blocks, this procedure is primarily of theoretical interest. In R, it takes 50 times longer to carry out the calculation based on blocks than it does to compute the determinant directly!
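
For what it's worth, here is a rough timing sketch along those lines (it reuses the objects X, A, ..., G created above; the exact ratio will depend on the machine and the Matrix version):

direct    <- function() det(X)
blockwise <- function() det(A) * det(B) * det(C) *
  det(D - crossprod(E, solve(A, E))
        - crossprod(F, solve(B, F))
        - crossprod(G, solve(C, G)))
system.time(replicate(1e4, direct()))     # direct determinant
system.time(replicate(1e4, blockwise()))  # block-by-block formula: noticeably slower here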

whuber