I know that in a regression situation, having a set of highly correlated predictors is usually "bad" because it makes the estimated coefficients unstable (their variance goes toward infinity as the determinant of the design matrix's cross-product goes toward zero).
My question is whether this "badness" persists in a PCA situation. Do the coefficients/loadings/weights/eigenvectors for any particular PC become unstable/arbitrary/non-unique as the covariance matrix becomes singular? I am particularly interested in the case where only the first principal component is retained, and all others are dismissed as "noise" or "something else" or "unimportant".
I don't think that it does: you are just left with some trailing principal components that have zero, or close to zero, variance.
It's easy to see this in the simple extreme case with 2 variables: suppose they are perfectly correlated. Then the first PC captures the exact linear relationship, and the second PC is perpendicular to the first, with all scores equal to zero for every observation (i.e. zero variance). I'm wondering whether this holds more generally.
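A quick numerical sketch of the two-variable case (this is my own toy example in NumPy, with a made-up dataset where the second variable is an exact multiple of the first):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
X = np.column_stack([x, 2 * x])  # perfectly correlated: x2 = 2 * x1

# Covariance matrix is singular (rank 1)
cov = np.cov(X, rowvar=False)

# eigh returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(cov)

# The smallest eigenvalue is (numerically) zero: the second PC
# has zero variance, so its scores are zero for all observations.
# The first PC direction is still perfectly well-defined: it is
# proportional to (1, 2), the exact linear relationship.
pc1 = eigvecs[:, -1]
```

Running this, `eigvals[0]` comes out at machine-precision zero while `pc1` is (up to sign) `(1, 2)/sqrt(5)`, which suggests the leading PC stays stable even as the covariance matrix becomes singular.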