I am having some trouble understanding how PCA and SVD work, as most materials focus on calculating the factors rather than on how to apply them to new entries. To provide some context for my questions, let us first consider the following model:
I have a data set $X$ with $p$ features and $n$ rows (i.e. observations). Suppose I use the first two factors $f_{1}$ and $f_{2}$ to build the model $y=\beta_{0}+\beta_{1}\cdot f_{1} + \beta_{2}\cdot f_{2}$. Now suppose I receive a new row vector $r^{*}$ (a single observation with $p$ features). How can I project $r^{*}$ onto $f_{1}$ and $f_{2}$?
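To make the question concrete, here is a minimal numpy sketch of how I *think* the projection should work (the random data and variable names are just placeholders I made up): center $r^{*}$ with the training column means, then multiply by the same right-singular vectors used for the training scores. Is this correct?

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))    # n = 100 observations, p = 5 features

# Center the training data and keep the column means for later use.
mu = X.mean(axis=0)
Xc = X - mu

# SVD of the centered data; the rows of Vt are the principal directions.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Factor scores of the training data on the first two components:
F = Xc @ Vt[:2].T                # columns would be f_1 and f_2

# A new observation r* projected the same way: center it with the
# *training* means, then multiply by the same first two directions.
r_star = rng.normal(size=5)
scores = (r_star - mu) @ Vt[:2].T    # coordinates of r* on f_1 and f_2
print(scores)
```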
A second question: as far as I understand, if we do an SVD of the matrix $X$ we get $X=U\Sigma V^{*}$, but which of these factors actually contains the eigenvectors? And are those eigenvectors the same as the $f_{1}$ above?
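Here is a small sketch of what I am trying to verify, assuming the columns of $V$ are the eigenvectors of $X^{\top}X$ (again with placeholder random data):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Eigendecomposition of X^T X (proportional to the covariance matrix).
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc)
order = np.argsort(eigvals)[::-1]     # eigh returns ascending order
eigvecs = eigvecs[:, order]

# The columns of V should match the eigenvectors up to sign,
# and the squared singular values should equal the eigenvalues.
print(np.allclose(np.abs(Vt.T), np.abs(eigvecs)))
print(np.allclose(S**2, eigvals[order]))
```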
Thank you for helping me understand.