If $X$ is the observed data matrix and $Y$ is the latent variable, then
$$X=WY+\mu+\epsilon$$
where $\mu$ is the mean of the observed data, $\epsilon$ is the isotropic Gaussian error/noise in the data, and $W$ is a matrix whose columns span the principal subspace.
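For concreteness, here is a minimal sketch of sampling from this generative model in Python/NumPy (the dimensions and parameter values below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

d, q, n = 5, 2, 1000            # observed dim, latent dim, sample count (made up)
W = rng.normal(size=(d, q))     # loading matrix; columns span the principal subspace
mu = rng.normal(size=d)         # mean of the observed data
sigma = 0.1                     # isotropic noise standard deviation

Y = rng.normal(size=(q, n))             # latent variables, standard normal prior
eps = sigma * rng.normal(size=(d, n))   # Gaussian noise
X = W @ Y + mu[:, None] + eps           # observed data: X = W Y + mu + eps
```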
My question is: when ordinary PCA is used, we get a set of orthonormal eigenvectors, stacked as the rows of a matrix $E$, for which the following holds (for centered $X$):
$$Y=EX$$
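To be explicit about what I mean, this is ordinary PCA on the toy data above (eigenvectors of the sample covariance, stacked as the rows of $E$; $X$ is centered first):

```python
# Ordinary PCA: eigendecomposition of the sample covariance matrix.
Xc = X - X.mean(axis=1, keepdims=True)   # center the data
C = Xc @ Xc.T / (n - 1)                  # sample covariance, d x d
eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # re-sort descending by explained variance
E = eigvecs[:, order[:q]].T              # top-q eigenvectors as the rows of E
Y_pca = E @ Xc                           # component scores: Y = E X
```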
But in PPCA, the columns of $W$ are neither orthonormal nor eigenvectors. So how can I get the principal components from $W$?
Following my instinct, I searched for ppca in MATLAB and came across this line in its documentation:
> At convergence, the columns of W span the subspace, but they are not orthonormal. ppca obtains the orthonormal coefficients, coeff, for the components by orthogonalization of W.
I modified the ppca code a little to return $W$, ran it, and after orthogonalization I did recover the principal components from $W$.
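To show what I did, here is the same experiment translated to Python/NumPy so it is self-contained (the EM updates follow Tipping & Bishop; using an SVD for the orthogonalization step is my assumption, since the documentation only says "orthogonalization of W"):

```python
# Minimal EM for PPCA on the toy data X above, giving a non-orthonormal W_hat.
Xc = X - X.mean(axis=1, keepdims=True)
W_hat = rng.normal(size=(d, q))   # random initialization
sig2 = 1.0                        # initial noise variance
for _ in range(500):
    # E-step: posterior moments of the latent variables given W_hat, sig2
    M = W_hat.T @ W_hat + sig2 * np.eye(q)
    Minv = np.linalg.inv(M)
    EY = Minv @ W_hat.T @ Xc                  # E[y_n | x_n], stacked as q x n
    EYY = n * sig2 * Minv + EY @ EY.T         # sum_n E[y_n y_n^T]
    # M-step: update the loading matrix and the noise variance
    W_hat = (Xc @ EY.T) @ np.linalg.inv(EYY)
    sig2 = (np.sum(Xc**2)
            - 2 * np.trace(W_hat.T @ Xc @ EY.T)
            + np.trace(EYY @ W_hat.T @ W_hat)) / (n * d)

# Orthogonalize W_hat: its left singular vectors form an orthonormal basis of
# its column space, sorted by singular value (largest first).
U, s, Vt = np.linalg.svd(W_hat, full_matrices=False)
print(np.round(np.abs(E @ U), 3))   # ~ identity: matches PCA eigenvectors up to sign
```

On this toy data the printed matrix is (numerically) the identity, i.e. the orthogonalized columns match the PCA eigenvectors up to sign, which is exactly the coincidence I am asking about.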
Why does this orthogonalization yield the eigenvectors along which most of the variance is seen?
I am assuming that orthogonalization gives me a set of orthogonal/orthonormal vectors that span the principal subspace, but why is this orthogonalized matrix equal to the eigenmatrix (I know that the eigenmatrix in PCA is also orthonormal)? Can I assume that the principal subspace is spanned by only one unique set of orthonormal vectors? In that case the two results would always coincide.
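For reference, the closed-form maximum-likelihood solution in Tipping & Bishop (1999) seems relevant here (their notation; this is just my reading of the paper):

$$W_{ML} = U_q\left(\Lambda_q - \sigma^2 I_q\right)^{1/2} R$$

where $U_q$ holds the top $q$ eigenvectors of the sample covariance as columns, $\Lambda_q$ is the diagonal matrix of the corresponding eigenvalues, and $R$ is an arbitrary $q \times q$ rotation. If I read this correctly, orthogonalizing $W_{ML}$ strips off $(\Lambda_q - \sigma^2 I_q)^{1/2} R$ and leaves $U_q$ (up to sign and ordering), even though the subspace by itself does not determine a unique orthonormal basis. Is this the right way to see it?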