I've seen some great posts explaining PCA and why, under this approach, the eigenvectors of a (symmetric) correlation matrix are orthogonal. I also understand how to show that these vectors are orthogonal to each other (e.g. taking the cross-product of the eigenvector matrix with itself yields a matrix whose off-diagonal entries are zero).
My first question is: when you look at the correlations between a PCA's eigenvectors, why are the off-diagonal entries of that correlation matrix non-zero? In other words, how can the eigenvectors be correlated if they are orthogonal?
This question is not directly about PCA, but I put it in this context since that is how I ran into the issue. I am using R, specifically the psych package, to run the PCA.
If it helps to have an example, this post on StackOverflow has one that is very convenient and related (also in R). In that post, the author of the best answer shows that the PCA loadings (eigenvectors) are orthogonal by using Factor Congruence or cross-products; in his example, the matrix L is the PCA loadings matrix. The one thing that link does not show is that cor(L) produces the output I am asking about, with non-zero correlations between the eigenvectors.
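To make the puzzle concrete, here is a small numeric sketch of what I mean. It is in Python/NumPy rather than R, and the data matrix is made up, so the numbers are purely illustrative, but the phenomenon is the same: the eigenvector matrix V of the correlation matrix satisfies t(V) %*% V = I, yet the correlation of V's columns is generally not the identity, because correlation centers each column by its mean before taking dot products.

```python
import numpy as np

# Made-up illustrative data: 100 observations of 3 correlated variables
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[1.0, 0.5, 0.2],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 1.0]])

R = np.corrcoef(X, rowvar=False)   # correlation matrix of the data
eigvals, V = np.linalg.eigh(R)     # columns of V are the eigenvectors

# Orthogonality: V^T V is the identity, so off-diagonals are ~0
print(np.round(V.T @ V, 10))

# "Correlation" of the eigenvectors: each column is centered by its
# mean first, so the off-diagonal entries are generally NOT zero
C = np.corrcoef(V, rowvar=False)
print(np.round(C, 3))
```

So the cross-product check and the cor() check are answering different questions, which is exactly what confuses me.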
I am especially confused about how orthogonal vectors can be correlated after reading this post, which seems to prove that orthogonality is equivalent to lack of correlation: Why are PCA eigenvectors orthogonal and what is the relation to the PCA scores being uncorrelated?
My second question is: when the PCA eigenvectors are used to calculate PCA scores, the scores themselves are uncorrelated, as I expected. Is there a connection to my first question here, i.e. why are the eigenvectors correlated but not the scores?
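For completeness, here is the companion sketch for the second question (again in Python/NumPy with the same made-up illustrative data). The scores are the standardized data projected onto the eigenvectors, and their correlation matrix comes out diagonal because cov(ZV) = t(V) R V = the diagonal matrix of eigenvalues:

```python
import numpy as np

# Same made-up illustrative data as before
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[1.0, 0.5, 0.2],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 1.0]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize (PCA on correlations)
R = np.corrcoef(X, rowvar=False)
eigvals, V = np.linalg.eigh(R)

scores = Z @ V                            # PCA scores
C = np.corrcoef(scores, rowvar=False)
print(np.round(C, 10))                    # ~identity: scores are uncorrelated
```

So in this sketch the scores really are uncorrelated even though cor(V) is not the identity, which is the contrast I am asking about.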