
Given a bivariate normal distribution $N(0, \Sigma)$ with $\Sigma=\begin{pmatrix}1 & \rho \\ \rho & 1 \end{pmatrix}$ and $\rho > 0$, I want to compute the principal components vector $Y$ using the transformation $Y = \Gamma^{T}(X - \mu)$. I've already calculated my eigenvectors $\Gamma=\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}$, and it's given that $\mu = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, but I'm not sure exactly how $X$ is supposed to look. I eventually want to use each component $(x_1, x_2)$ to find the corresponding loading vectors and compute the variance of each component. Any help would be really appreciated. Thanks

Peter
  • Just to be sure: the rows of $\Gamma$ are your eigenvectors? $X$ is simply your vector of interest, i.e. $(x_1, x_2)$. $Y$ is your data represented in this new coordinate system, whose axes are the principal components – Sebastian Apr 10 '20 at 14:19
  • https://stats.stackexchange.com/questions/2691/making-sense-of-principal-component-analysis-eigenvectors-eigenvalues is an excellent explanation of PCA – Sebastian Apr 10 '20 at 14:21
  • @Sebastian the columns of $\Gamma$ are the eigenvectors. – Peter Apr 10 '20 at 14:26
  • Ah sure I confused the Basis Change matrices. Thanks – Sebastian Apr 10 '20 at 14:30
  • The remarks at the end of my post at https://stats.stackexchange.com/a/71303/919 answer your question. – whuber Apr 10 '20 at 15:01
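Not an answer, but a minimal numpy sketch of the setup being discussed. It draws samples $X$ from $N(0, \Sigma)$ (each row of the sample matrix is one observation, which is what "$X$" looks like in practice), applies $Y = \Gamma^{T}(X - \mu)$ with the columns of $\Gamma$ taken as normalized eigenvectors of $\Sigma$, and checks that the variance of each component matches the corresponding eigenvalue ($1 - \rho$ and $1 + \rho$). The value `rho = 0.6` and the sample size are illustrative assumptions, not from the question:

```python
import numpy as np

# Illustrative assumptions: rho and the sample size are chosen arbitrarily.
rho = 0.6
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])
mu = np.zeros(2)

# Columns of Gamma are the normalized eigenvectors of Sigma.
# eigh returns eigenvalues in ascending order: 1 - rho, then 1 + rho.
eigvals, Gamma = np.linalg.eigh(Sigma)

# Draw observations of X; each row of X is one realization (x_1, x_2).
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mu, Sigma, size=100_000)

# Principal components: Y = Gamma^T (X - mu), applied row-wise.
Y = (X - mu) @ Gamma

print(np.round(eigvals, 3))        # theoretical variances of the components
print(np.round(Y.var(axis=0), 3))  # empirical variances, close to eigvals
```

The empirical variances of the two columns of `Y` should be close to $1 - \rho$ and $1 + \rho$, which is the variance computation the question asks about.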

0 Answers