I understand the idea behind factor analysis, but everything I read on the topic seems to cover eigenvalues and eigenvectors only vaguely.
What's the correct way to understand eigenvalues and eigenvectors in factor analysis? What are they, and how are they related to factors, communalities, variance captured, and factor loadings?
I'm not so much interested in how we decompose a matrix into eigenvalues and eigenvectors, but rather in how we interpret them in the context of factor analysis.
This becomes especially important when employing the Kaiser rule (eigenvalues > 1) and when looking at scree plots (where the Y axis is the eigenvalue).
I've seen similar questions about this (Eigenvalue vs Variance and What is the rationale behind the "eigenvalue > 1" criterion in factor analysis or PCA?), but the answers don't really explain the link between the two concepts and the ideas and language of factor analysis.
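To make concrete which quantities I mean, here is a small NumPy sketch (with made-up data; all names are mine, not from any text) of what the Kaiser rule and a scree plot's Y axis are computed from — the eigenvalues of a correlation matrix, which sum to the number of variables, so each is measured in "variables' worth" of variance:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical data: 6 observed variables with some shared structure
X = rng.standard_normal((300, 6)) @ rng.standard_normal((6, 6))
X = (X - X.mean(0)) / X.std(0)            # standardize each column
R = np.corrcoef(X, rowvar=False)          # 6x6 correlation matrix

# eigenvalues of R, sorted largest first (as on a scree plot's Y axis)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

print(eigvals.sum())            # = trace of R = 6: one unit of variance per variable
print(eigvals / eigvals.sum())  # proportion of total variance per component
print(eigvals > 1)              # Kaiser rule: retain components above one "variable's worth"
```

So an eigenvalue above 1 marks a component that accounts for more variance than a single standardized variable contributes — which is (as I understand it) the intuition the rule rests on.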
EDIT:
I saw on this site that "the eigenvectors of R (multiplied by their eigenvalues) are known as the factor loadings and are literally the correlations of each variable in X with an underlying factor or principal component". Is there an intuitive way to understand why this is the case?
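Here is a small NumPy check of the identity I'm asking about (with random data; note I had to scale by the square root of each eigenvalue for the numbers to match — I'd also welcome clarification on whether the quote means sqrt(eigenvalue)):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical correlated data: 4 standardized variables
X = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 4))
X = (X - X.mean(0)) / X.std(0)
R = (X.T @ X) / len(X)                    # correlation matrix

vals, vecs = np.linalg.eigh(R)            # eigen-decomposition of R
order = np.argsort(vals)[::-1]            # sort largest eigenvalue first
vals, vecs = vals[order], vecs[:, order]

# candidate loadings: eigenvector scaled by sqrt(eigenvalue)
loadings = vecs * np.sqrt(vals)

# principal-component scores, and the correlation of each variable with each score
scores = X @ vecs
corr = np.array([[np.corrcoef(X[:, i], scores[:, j])[0, 1]
                  for j in range(4)] for i in range(4)])

print(np.allclose(loadings, corr))        # → True
```

The correlations of the variables with the components do come out equal to the sqrt-scaled eigenvectors, which I take to be the claim — but I'd like to understand *why* that scaling turns an eigenvector into a column of correlations.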