
If so, why would you apply another rotation after you have already found the "variance-maximizing" - and in that sense optimal - rotation? Wouldn't the second rotation lead, again, to a non-variance-maximizing situation?

Please enlighten me.
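
To make the question concrete, here is a small numpy sketch (the toy data, variable names, and the particular second rotation are mine, just for illustration) of what I mean by "PCA with all components is a rotation", and of what a further rotation does to the component variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples, 3 variables, centered, with unequal variances.
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.3])
X -= X.mean(axis=0)

# PCA with all components: V is orthogonal (a rotation, possibly combined with
# a reflection), so the scores X @ V are the same point cloud in a rotated basis.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt.T

# The covariance of the scores is diagonal, with variances in decreasing order:
# the first axis carries the maximal achievable variance.
print(np.round(np.cov(scores, rowvar=False), 3))

# Now apply a second rotation, e.g. 45 degrees in the PC1-PC2 plane.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
rotated = scores @ R

# Total variance is preserved (rotations do not change it), but it is
# redistributed: the covariance is no longer diagonal and the first axis no
# longer carries the maximal variance.
print(np.round(np.cov(rotated, rowvar=False), 3))
print(np.trace(np.cov(scores, rowvar=False)), np.trace(np.cov(rotated, rowvar=False)))
```

As far as I can tell, nothing is "lost" by the second rotation except the variance-maximizing ordering itself; whether that matters presumably depends on what the rotation is for.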

  • Do you know of practical cases in which all the components are kept? I believe that the second rotation is most often done *after* eliminating (filtering out) the smaller components (equivalently, selecting the larger ones) and is most often done for the purpose of easier representation (aligning with more meaningful conceptual axes). [See the varimax sketch after these comments.] – Sextus Empiricus Jan 19 '18 at 13:56
  • What you mean by 'second rotation' and 'optimal rotation' is unclear. – Sextus Empiricus Jan 19 '18 at 13:57
  • PCA is just the singular value decomposition of a data matrix. The gist of SVD is that any matrix (using your language) can be factored as a 'first rotation', followed by a change of scale, followed by a second rotation. So any linear change of variables is just a rotation, followed by a scale change, followed by another rotation [see the SVD sketch below]. – meh Jan 19 '18 at 14:12
  • So far, I understand PCA with "all components" as a rotation, as my question suggests (i.e. the first and, in that sense, optimal rotation). – PeterPancake Jan 19 '18 at 14:14
  • There's an internal contradiction in the question. Keeping all $n$ PCs indeed corresponds to a rotation which is (almost) unique. But the "variance maximizing rotation" is far from unique: it merely identifies one direction out of an $n-1$ dimensional manifold of possible directions, thereby determining an $(n-1)(n-2)/2$ dimensional manifold of possible rotations. When $n=2$ the first PC determines both PCs, but for $n\gt 2$ the first PC does not fully determine a rotation. – whuber Jan 19 '18 at 15:57
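
To spell out the dimension count in the last comment (standard facts about the rotation group, nothing specific to PCA): $\dim SO(n) = n(n-1)/2$. The first PC only fixes a unit direction $u_1 \in S^{n-1}$, a manifold of dimension $n-1$. Rotations that share this first direction differ by a rotation of the orthogonal complement $u_1^\perp$, i.e. by an element of $SO(n-1)$, which has dimension $(n-1)(n-2)/2$; consistently,
$$(n-1) + \frac{(n-1)(n-2)}{2} = \frac{n(n-1)}{2} = \dim SO(n).$$
For $n=2$ the residual dimension is $0$, so fixing the first PC direction determines the rotation (the direction itself is only defined up to sign, hence "(almost) unique"); for $n>2$ it does not.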
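
To make the scenario in the first comment concrete, here is a hedged numpy sketch: keep only the top $k$ components, then rotate those with varimax. The `varimax` routine below is the usual textbook iteration written out by hand (not taken from any particular library), and the toy data and the choice $k = 2$ are arbitrary. The rotation leaves the total retained variance unchanged but spreads it over the rotated axes, which is what makes the loadings easier to interpret.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Standard varimax rotation of a p x k loading matrix (textbook iteration)."""
    p, k = loadings.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
        )
        R = u @ vt
        d_new = s.sum()
        if d_new < d_old * (1 + tol):
            break
        d_old = d_new
    return loadings @ R

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6)) @ rng.normal(size=(6, 6))   # correlated toy data
X -= X.mean(axis=0)

U, sv, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                  # keep only the two largest components
loadings = Vt[:k].T * sv[:k]           # columns of V scaled by the singular values

rotated = varimax(loadings)

# The total retained variance is unchanged (the rotation is orthogonal) ...
print((loadings**2).sum(), (rotated**2).sum())
# ... but it is typically redistributed over the rotated axes instead of being
# concentrated in variance-maximizing order.
print((loadings**2).sum(axis=0), (rotated**2).sum(axis=0))
```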

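And a minimal sketch of the factorization mentioned in the SVD comment (numpy, an arbitrary toy matrix; square here, so both orthogonal factors are genuine rotations up to reflection):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))    # an arbitrary linear change of variables

# SVD: A = U @ diag(s) @ Vt, with U and Vt orthogonal (rotations, possibly
# composed with a reflection) and s >= 0 a change of scale along each axis.
U, s, Vt = np.linalg.svd(A)

print(np.allclose(A, U @ np.diag(s) @ Vt))                                 # exact reconstruction
print(np.allclose(U @ U.T, np.eye(3)), np.allclose(Vt @ Vt.T, np.eye(3)))  # both factors orthogonal
```
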
0 Answers