Calculating the inverse square root of a square matrix $K$ is a fairly straightforward process mathematically, provided that $K$ is a valid covariance matrix, i.e. symmetric positive definite. This is the case in the calculation of canonical correlations, as you are concerned with the matrices $\Sigma_{YX}$ and $\Sigma_{XX}$, which are defined to be covariance matrices (more specifically, $\Sigma_{YX}$ and $\Sigma_{XX}$ are cross- and auto-covariance matrices respectively). Assuming $K$ has the eigendecomposition $K = VDV^T$, where the columns of $V$ are the eigenvectors of $K$ and $D$ is the diagonal matrix holding the eigenvalues associated with those eigenvectors, the inverse square root is simply $K^{-\frac{1}{2}} = V D^{-\frac{1}{2}} V^T$. Taking the square root of the diagonal elements of $D$ and then inverting them is trivial.
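To make this concrete, here is a minimal sketch of that construction in NumPy (the function name `inv_sqrtm` and the example matrix are just for illustration):

```python
import numpy as np

def inv_sqrtm(K):
    """Inverse square root of a symmetric positive definite matrix K,
    computed via its eigendecomposition K = V D V^T."""
    # eigh is the appropriate routine for symmetric matrices;
    # it returns real eigenvalues and orthonormal eigenvectors
    eigvals, V = np.linalg.eigh(K)
    # K^{-1/2} = V D^{-1/2} V^T: square-root the eigenvalues,
    # invert them, and reassemble
    return V @ np.diag(1.0 / np.sqrt(eigvals)) @ V.T

# Small symmetric positive definite example
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])
K_inv_sqrt = inv_sqrtm(K)

# Sanity check: K^{-1/2} K K^{-1/2} should recover the identity
print(np.allclose(K_inv_sqrt @ K @ K_inv_sqrt, np.eye(2)))  # True
```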
As you correctly note, a lot of statistical packages implement higher-level functions (e.g. cancorr) so users do not need to compute a matrix inverse square root explicitly on their own. In many cases such a higher-level routine has been optimised by its developers for speed and numerical accuracy. If a particular higher-level routine is available to you, I would recommend using it directly so as to ensure the correctness of the computation. If you want to investigate the routine in depth, though, implementing it from scratch will be invaluable.
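If you do go the from-scratch route, one way to gain confidence in your implementation is to cross-check it against a general-purpose library routine. A sketch, assuming Python with SciPy available (SciPy is my assumption, since you did not name your environment):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

K = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# General-purpose SciPy routine for arbitrary matrix powers
K_inv_sqrt_lib = fractional_matrix_power(K, -0.5)

# Hand-rolled eigendecomposition version, as derived above
eigvals, V = np.linalg.eigh(K)
K_inv_sqrt_eig = V @ np.diag(eigvals ** -0.5) @ V.T

# The two should agree up to floating-point tolerance
print(np.allclose(K_inv_sqrt_lib, K_inv_sqrt_eig))  # True
```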