I am currently studying factor analysis, and am told the following:
We have a random sample $\mathbb{X} = [\mathbf{X}_1 \ \mathbf{X}_2 \ \dots \ \mathbf{X}_n] \sim \text{Sam}(\overline{\mathbf{X}}, S)$ of rank $r$. For $k \le r$, a (sample) $k$-factor model of $\mathbb{X}$ is $$\mathbb{X} = \hat{A} \mathbb{F} + \overline{\mathbf{X}} + \mathcal{R}$$ The common factors $\mathbb{F} = [\mathbf{F}_1 \ \mathbf{F}_2 \ \dots \ \mathbf{F}_n]$ are $k$-dimensional random vectors with zero mean and identity covariance matrix.
The sample factor loadings form the $d \times k$ matrix $\hat{A}$.
The specific factors form the $d \times n$ matrix $\mathcal{R}$ of random vectors with zero mean and a diagonal covariance matrix $\Omega$.
The common and specific factors are uncorrelated.
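To make sure I understand the model, here is a small simulation I wrote myself (all the numeric values are my own toy choices, not from the course notes). If $\mathbb{X} = \hat{A}\mathbb{F} + \overline{\mathbf{X}} + \mathcal{R}$ with $\mathbb{F} \sim (0, I_k)$, $\mathcal{R} \sim (0, \Omega)$, and the two uncorrelated, then the covariance of the columns of $\mathbb{X}$ should be approximately $\hat{A}\hat{A}^\top + \Omega$:

```python
import numpy as np

# Toy simulation of the k-factor model (my own example values).
rng = np.random.default_rng(1)
d, k, n = 4, 2, 200_000

A = rng.standard_normal((d, k))        # factor loadings (d x k)
x_bar = rng.standard_normal(d)         # sample mean
omega = np.diag([0.5, 1.0, 0.3, 0.8])  # diagonal specific covariance

F = rng.standard_normal((k, n))        # common factors ~ (0, I_k)
# Specific factors ~ (0, Omega): scale iid standard normals by sqrt of the
# diagonal entries of Omega.
R = np.sqrt(np.diag(omega))[:, None] * rng.standard_normal((d, n))

X = A @ F + x_bar[:, None] + R         # data generated by the factor model

S = np.cov(X)                          # sample covariance of the data
# S should be close to A A^T + Omega (up to Monte Carlo error).
print(np.round(S - (A @ A.T + omega), 2))
```

This also hints at why rotations are possible at all: $\hat{A}\hat{A}^\top$ is unchanged when $\hat{A}$ is replaced by $\hat{A}T$ for orthogonal $T$, so the covariance structure alone cannot pin down the loadings.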
I am told that we can apply "rotations" to the factor loadings, such as "orthogonal" and "oblique" rotations. I am also told that, although the common factors and the specific factors are uncorrelated, applying these rotations to the loadings makes the common factors correlated. Could someone please explain this? Furthermore, it seems to me that we would not want correlation in factor analysis, since correlation would ruin/pollute whatever signal we are getting from performing the method in the first place, no?
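Here is a numerical sketch of what I mean (again my own toy example). Writing $\hat{A}\mathbb{F} = (\hat{A}T)(T^{-1}\mathbb{F})$, replacing the loadings by $\hat{A}T$ forces the common factors to become $T^{-1}\mathbb{F}$, whose covariance is $T^{-1}T^{-\top}$. For an orthogonal $T$ this stays the identity, but for an oblique (invertible, non-orthogonal) $T$ it does not:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 2, 100_000

# Common factors with (approximately) zero mean and identity covariance.
F = rng.standard_normal((k, n))

# An orthogonal rotation: T @ T.T = I.
theta = 0.7
T_orth = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
# An oblique transform: invertible but not orthogonal.
T_obl = np.array([[1.0, 0.8],
                  [0.0, 1.0]])

# Covariance of the transformed factors T^{-1} F in each case.
cov_orth = np.cov(np.linalg.inv(T_orth) @ F)  # stays ~ identity
cov_obl = np.cov(np.linalg.inv(T_obl) @ F)    # off-diagonal becomes nonzero

print(np.round(cov_orth, 2))  # approximately the identity matrix
print(np.round(cov_obl, 2))   # nonzero off-diagonal: factors now correlated
```

So, if I understand correctly, it is specifically the oblique rotations that produce correlated factors, while orthogonal rotations preserve the identity covariance — but I would still like to understand why one would ever accept that correlation.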