Analysis of the Problem
The SVD of a matrix is never unique. Let matrix $A$ have dimensions $n\times k$ and let its SVD be
$$A = U D V^\prime$$
for an $n\times p$ matrix $U$ with orthonormal columns, a diagonal $p\times p$ matrix $D$ with non-negative entries, and a $k\times p$ matrix $V$ with orthonormal columns.
Now choose, arbitrarily, any diagonal $p\times p$ matrix $S$ having $\pm 1$s on the diagonal, so that $S^2 = I$ is the $p\times p$ identity $I_p$. Then
$$A = U D V^\prime = U I D I V^\prime = U (S^2) D (S^2) V^\prime = (US) (SDS) (VS)^\prime$$
is also an SVD of $A$ because $$(US)^\prime(US) = S^\prime U^\prime U S = S^\prime I_p S = S^\prime S = S^2 = I_p$$ demonstrates $US$ has orthonormal columns and a similar calculation demonstrates $VS$ has orthonormal columns. Moreover, since $S$ and $D$ are diagonal, they commute, whence $$S D S = DS^2 = D$$ shows $D$ still has non-negative entries.
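This ambiguity is easy to verify numerically. The following R sketch (using the same matrix `m` as in the Code section below) flips the signs of matched columns of $U$ and $V$ and checks that the product is unchanged:

```r
m <- matrix(c(1,0,1, 2,1,1, 1,0,0), byrow = TRUE, nrow = 3)
fit <- svd(m)
S <- diag(c(1, -1, -1))      # an arbitrary diagonal matrix of +/-1s
U2 <- fit$u %*% S            # flip the signs of some columns of U ...
V2 <- fit$v %*% S            # ... and the *matching* columns of V
max(abs(U2 %*% diag(fit$d) %*% t(V2) - m))  # essentially zero: still an SVD of m
```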
The method implemented in the code to find an SVD finds a $U$ that diagonalizes $$AA^\prime = (UDV^\prime)(UDV^\prime)^\prime = UDV^\prime V D^\prime U^\prime = UD^2 U^\prime$$ and, similarly, a $V$ that diagonalizes $$A^\prime A = VD^2V^\prime.$$ It proceeds to compute $D$ in terms of the eigenvalues found in $D^2$. The problem is that this does not ensure a consistent matching of the columns of $U$ with the columns of $V$: each eigenvector is determined only up to sign, and the two diagonalizations make their sign choices independently.
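A short R sketch of this failure mode (again with the matrix `m` from the Code section): the two eigendecompositions are computed independently, so a naive reconstruction using the square roots of the eigenvalues may differ from $A$.

```r
m <- matrix(c(1,0,1, 2,1,1, 1,0,0), byrow = TRUE, nrow = 3)
U <- eigen(tcrossprod(m))$vectors            # diagonalizes A A'
V <- eigen(crossprod(m))$vectors             # diagonalizes A' A
D <- diag(sqrt(eigen(crossprod(m))$values))  # only the magnitudes of the singular values
max(abs(U %*% D %*% t(V) - m))               # need not be (close to) zero
```

Whether this particular reconstruction happens to succeed depends on the sign choices made by `eigen`; nothing forces the signs of the columns of $U$ and $V$ to agree.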
A Solution
Instead, after finding such a $U$ and such a $V$, use them to compute
$$U^\prime A V = U^\prime (U D V^\prime) V = (U^\prime U) D (V^\prime V) = D$$
directly and efficiently. The diagonal values of this $D$ are not necessarily positive, because nothing in the two separate diagonalizations of $AA^\prime$ and $A^\prime A$ guarantees their sign choices agree. Make them positive by choosing the entries along the diagonal of $S$ to equal the signs of the entries of $D$, so that $SD$ has all non-negative values. Compensate for this by right-multiplying $U$ by $S$:
$$A = U D V^\prime = (US) (SD) V^\prime.$$
That is an SVD.
Example
Let $n=p=k=1$ with $A=(-2)$. An SVD is
$$(-2) = (1)(2)(-1)$$
with $U=(1)$, $D=(2)$, and $V=(-1)$.
If you diagonalize $AA^\prime = (4)$ you would naturally choose $U=(1)$ and $D=(\sqrt{4})=(2)$. Likewise if you diagonalize $A^\prime A=(4)$ you would choose $V=(1)$. Unfortunately, $$UDV^\prime = (1)(2)(1) = (2) \ne A.$$ Instead, compute $$D=U^\prime A V = (1)^\prime (-2) (1) = (-2).$$ Because this is negative, set $S=(-1)$. This adjusts $U$ to $US = (1)(-1)=(-1)$ and $D$ to $SD = (-1)(-2)=(2)$. You have obtained $$A = (-1)(2)(1),$$ which is one of the two possible SVDs (but not the same as the original!).
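For concreteness, here is an R transcription of this $1\times 1$ example, wrapping the scalar in a matrix so the same functions apply. (Which of $\pm 1$ `eigen` returns for $U$ and $V$ is its own choice; the sign fix works either way.)

```r
A <- matrix(-2, 1, 1)
U <- eigen(tcrossprod(A))$vectors   # an eigenvector of A A' = (4): (1) or (-1)
V <- eigen(crossprod(A))$vectors    # an eigenvector of A' A = (4)
D <- t(U) %*% A %*% V               # may be negative, e.g. (-2) when U = V = (1)
S <- diag(sign(diag(D)), nrow = 1)  # S = (-1) when D is negative
U <- U %*% S                        # adjust U ...
D <- S %*% D                        # ... so that D becomes (2)
U %*% D %*% t(V)                    # recovers A = (-2)
```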
Code
Here is the modified code. Its output confirms:

- The method recreates `m` correctly.
- $U$ and $V$ really are still orthonormal.
- But the result is not the same SVD returned by `svd`. (Both are equally valid.)
m <- matrix(c(1,0,1, 2,1,1, 1,0,0), byrow=TRUE, nrow=3)
U <- eigen(tcrossprod(m))$vectors
V <- eigen(crossprod(m))$vectors
D <- diag(zapsmall(diag(t(U) %*% m %*% V)))
s <- diag(sign(diag(D))) # Find the signs of the diagonal entries of D
U <- U %*% s             # Adjust the columns of U
D <- s %*% D             # Fix up D. (D <- abs(D) would be more efficient.)
fit <- svd(m)            # For comparison with R's built-in SVD
U1 <- fit$u
V1 <- fit$v
D1 <- diag(fit$d)
zapsmall(U1 %*% D1 %*% t(V1)) # SVD
zapsmall(U %*% D %*% t(V))    # Hand-rolled SVD
zapsmall(crossprod(U))        # Check that U is orthonormal
zapsmall(tcrossprod(V))       # Check that V' is orthonormal