
In PCA and LDA, the eigenvectors with the $k$ largest eigenvalues give the principal components. However, when selecting these eigenvalues, should they be sorted by absolute value (ignoring the sign) or by the signed eigenvalues themselves?

That is, are the eigenvectors chosen in order of the dominant eigenvalues, and are the selected eigenvectors therefore just the dominant eigenvectors? [Dominant eigenvalues as defined here]

If so, can you provide a simple, intuitive explanation of why the sign of an eigenvalue does not matter?

I found some implementations doing so.
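
A minimal NumPy sketch of the sort step as I have seen it in such implementations (the data and the choice of $k$ here are made up purely for illustration):

```python
import numpy as np

# Illustrative data matrix: 100 samples, 5 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Center the data and form the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

# eigh is for symmetric matrices and returns eigenvalues in
# ascending order, so reverse to put the largest first.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]   # sorted by signed value, no abs()
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2
components = eigvecs[:, :k]   # top-k principal components
projected = Xc @ components   # data in the reduced k-dimensional space
```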

Ananda
  • The covariance matrix used for PCA does not have negative eigenvalues! – kjetil b halvorsen Jul 17 '21 at 20:16
  • @kjetilbhalvorsen Thanks for the comment. Could you briefly elaborate why? – Ananda Jul 17 '21 at 20:17
  • That is explained elsewhere on the site: https://stats.stackexchange.com/questions/52976/is-a-sample-covariance-matrix-always-symmetric-and-positive-definite – Arya McCarthy Jul 17 '21 at 20:27
  • [Covariances are variances.](https://stats.stackexchange.com/a/142472/919) In particular, the eigenvalues are variances (of the eigenvectors). Because variances are expectations of squares and squares (by definition) are never negative, the eigenvalues cannot be negative. – whuber Jul 17 '21 at 20:30
  • @whuber Thanks. I now understand why this would be the case for covariance matrices in PCA. However, is the same applicable for the scatter matrix $S_W^{-1} S_B$ of multi-class LDA? – Ananda Jul 17 '21 at 20:34
  • Those matrices are explicitly sums of squares, so they too must be positive semidefinite. – whuber Jul 17 '21 at 21:39
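
A quick numerical illustration of the point made in the comments, using made-up data: the sample covariance matrix, and the between-class scatter matrix $S_B$, are sums of squares (outer products), so their eigenvalues are non-negative.

```python
import numpy as np

# Made-up data: 50 samples, 4 features, split into two classes of 25.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))

# Covariance matrix is symmetric positive semidefinite:
# all eigenvalues are >= 0 (up to floating-point noise).
cov_eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))

# Between-class scatter S_B is a weighted sum of outer products
# (m - overall)(m - overall)^T, hence also positive semidefinite.
overall = X.mean(axis=0)
class_means = [X[:25].mean(axis=0), X[25:].mean(axis=0)]
Sb = sum(25 * np.outer(m - overall, m - overall) for m in class_means)
sb_eigvals = np.linalg.eigvalsh(Sb)
```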

0 Answers