
Given an $m \times n$ data matrix $X$ (assumed centered), the SVD of its covariance matrix $$C = XX^T = ULU^T$$ provides orthogonal unit vectors (the columns of $U$) that maximize the variance of the data projected onto those directions.
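A quick numerical sketch of this claim (the toy data, dimensions, and seed below are made up for illustration): the projection onto the top eigenvector of $XX^T$ has at least as much variance as the projection onto any other unit vector.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy data: m = 2 features, n = 500 samples, then centered
X = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]], size=500).T
X -= X.mean(axis=1, keepdims=True)

C = X @ X.T                       # (unnormalized) covariance of centered X
eigvals, U = np.linalg.eigh(C)    # eigh returns eigenvalues in ascending order
u1 = U[:, -1]                     # eigenvector of the largest eigenvalue

# variance of the projection onto u1 vs. onto a random unit vector
proj_var = np.var(u1 @ X)
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
rand_var = np.var(w @ X)
print(proj_var >= rand_var)       # projection onto u1 always wins
```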

Now consider an $m \times p$ data matrix $X$ and an $n \times p$ data matrix $Y$ (sharing the same $p$ samples). What about the SVD of their cross-covariance matrix $$C_{XY} = X Y^T = U S V^T$$? What is the geometrical meaning of the vectors $U_i$ and $V_i$?


In other words, the eigenvector corresponding to the largest singular value (eigenvalue) of $C_{XX}$ gives the direction of maximum variance when the $X$ data are projected onto that axis.

Do $U_1$ and $V_1$ have an analogous property, namely that projecting $X$ onto $U_1$ and $Y$ onto $V_1$ yields the largest possible covariance between the two projections?
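A small experiment along the lines of the question (toy data with a shared latent signal; dimensions and seed are arbitrary): for the top singular pair $(u_1, v_1)$ of $C_{XY}$, the covariance between $u_1^T X$ and $v_1^T Y$ is at least the (absolute) covariance achieved by any other pair of unit vectors, since $\max_{\|u\|=\|v\|=1} u^T C_{XY} v = \sigma_1$.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 1000                                    # shared samples
z = rng.standard_normal(p)                  # common latent signal
X = np.vstack([z + 0.1 * rng.standard_normal(p) for _ in range(3)])  # 3 x p
Y = np.vstack([z + 0.1 * rng.standard_normal(p) for _ in range(4)])  # 4 x p
X -= X.mean(axis=1, keepdims=True)
Y -= Y.mean(axis=1, keepdims=True)

Cxy = X @ Y.T                               # 3 x 4 (unnormalized) cross-covariance
U, S, Vt = np.linalg.svd(Cxy)
u1, v1 = U[:, 0], Vt[0, :]                  # top left/right singular vectors

# covariance of the two 1-D projections: (u1, v1) vs. a random unit pair
best = np.mean((u1 @ X) * (v1 @ Y))
a = rng.standard_normal(3); a /= np.linalg.norm(a)
b = rng.standard_normal(4); b /= np.linalg.norm(b)
rand = np.mean((a @ X) * (b @ Y))
print(best >= abs(rand))                    # the singular pair always wins
```

This is exactly the construction behind partial least squares / maximum-covariance analysis, which may be a useful search term.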

ddzzbbwwmm

  • $C$ is only a covariance matrix if $X$ has been scaled and centered. https://stats.stackexchange.com/questions/134282/relationship-between-svd-and-pca-how-to-use-svd-to-perform-pca Aside from that, what do you mean when you ask for the "implication" of a vector? – Sycorax Sep 07 '21 at 14:43
  • @Sycorax Maybe I should say 'geometrical meaning'. In the example of PCA, the eigenvectors are easy to understand (as in your link on PCA and SVD). What I'm not clear about is the SVD of the covariance between two groups of observations, as opposed to PCA on a single group of observations. – ddzzbbwwmm Sep 07 '21 at 15:04
  • I think this answers your question https://math.stackexchange.com/questions/1670675/interpretation-of-svd-for-non-square-matrices Here are some more: https://math.stackexchange.com/search?q=svd+interpret+answers%3A1+score%3A3 – Sycorax Sep 07 '21 at 15:17

0 Answers