
I am studying canonical correlation analysis (CCA).

The formula (https://en.wikipedia.org/wiki/Canonical_correlation)

involves raising a matrix to the power $-1/2$. My question is:

how can I do this computation in a program? Or, in practice, will no one actually care about the computation, since statistical programs like SAS (proc cancorr) will have built-in functions for it?

  • I am not sure about the question. Looking for something like [this](http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.linalg.sqrtm.html)? – IcannotFixThis Jul 12 '16 at 07:27
  • I just want to point out that in general a matrix doesn't have "the" one and only square root. – Qaswed Jul 12 '16 at 09:04
  • A side note. [Here is](http://stats.stackexchange.com/q/77287/3277) the CCA algorithm how it is actually computed in SPSS, for example. It uses Cholesky root instead of eigendecomposition (which is needed to take a root of a matrix). It probably is faster (but I didn't compare). – ttnphns Jul 12 '16 at 09:40
  • @ttnphns: Nice addition (+1); MATLAB (somewhat unsurprisingly) uses a combination of QR and then SVD to produce its final estimates. This is probably the most robust option numerically; it avoids computing the covariance matrix altogether (and thus working with a matrix having a higher condition number). – usεr11852 Jul 12 '16 at 09:52

1 Answer

Calculating the inverse square root of a square matrix $K$ is a fairly straightforward process mathematically, given that $K$ is a valid covariance matrix, i.e. symmetric positive definite. This is the case in the calculation of canonical correlations, as you are concerned with the matrices $\Sigma_{YX}$ and $\Sigma_{XX}$, which are defined to be covariance matrices (more specifically, $\Sigma_{YX}$ and $\Sigma_{XX}$ are cross- and auto-covariance matrices respectively). Assuming $K$ has the eigendecomposition $K = VDV^T$, where the columns of $V$ are the eigenvectors of $K$ and $D$ is the diagonal matrix holding the eigenvalues associated with those eigenvectors, the inverse square root of $K$ is simply $K^{-\frac{1}{2}} = V D^{-\frac{1}{2}} V^T$. Taking the square root of the elements of $D$ and then inverting them is trivial.
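A minimal sketch of this eigendecomposition approach in NumPy (the function name `inv_sqrtm` and the example matrix are my own, just for illustration):

```python
import numpy as np

def inv_sqrtm(K):
    """Inverse square root of a symmetric positive-definite matrix K,
    computed via its eigendecomposition K = V D V^T."""
    eigvals, V = np.linalg.eigh(K)            # eigh exploits symmetry
    D_inv_sqrt = np.diag(1.0 / np.sqrt(eigvals))
    return V @ D_inv_sqrt @ V.T               # K^{-1/2} = V D^{-1/2} V^T

# small symmetric positive-definite example (a 2x2 covariance matrix)
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
R = inv_sqrtm(K)

# sanity check: K^{-1/2} K K^{-1/2} should recover the identity
print(np.allclose(R @ K @ R, np.eye(2)))  # True
```

Note that `eigh` (rather than the general `eig`) is the right choice here, since $K$ is symmetric; it returns real eigenvalues and orthonormal eigenvectors.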

As you correctly note, many statistical programs implement higher-level functions (e.g. cancorr) so users do not need to compute a matrix inverse square root explicitly on their own. In many cases such a higher-level routine is optimised by its developers for speed and numerical accuracy. If a particular higher-level routine is available, I would recommend using it directly, so as to ensure the correctness of the computation. If you want to investigate the routine in depth, though, implementing it from scratch will be invaluable.
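For instance, assuming SciPy is available, its `scipy.linalg.fractional_matrix_power` provides exactly this as a library routine, and its result can be checked against the hand-rolled eigendecomposition:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

# example symmetric positive-definite matrix
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# library routine: K^{-1/2}
K_lib = fractional_matrix_power(K, -0.5)

# eigendecomposition by hand: V D^{-1/2} V^T
w, V = np.linalg.eigh(K)
K_hand = V @ np.diag(w ** -0.5) @ V.T

print(np.allclose(K_lib, K_hand))  # the two agree up to numerical precision
```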

usεr11852
  • There has been research into alternative methods, e.g. those based on Newton-Raphson. See [this](http://books.google.com/books?hl=en&id=2Wz_zVUEwPkC&pg=PA133) for instance. – J. M. is not a statistician Jul 23 '16 at 14:17
  • +1 @J.M.: Yes, of course. I am focused on the general principle here, but clearly iterative solutions can also be employed. – usεr11852 Jul 23 '16 at 16:33