
I have a data matrix $M$ with $n$ samples (rows) described by $m$ variables (columns) $X_1, X_2, \ldots, X_m$. I do an SVD to reduce the $m$ dimensions to just 3. I understand that the $x,y,z$ coordinates (i.e., the SVD values) are calculated from the eigenvectors of $MM^T$.

My question is, if I pick an arbitrary point in the SVD space (i.e. a value for SVD1, SVD2, SVD3, not necessarily in the data), is there a convenient way to translate that back to a set of the original variables (i.e., $X_1, X_2, \ldots X_m$)?
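A minimal numpy sketch of the setup described above, with made-up data standing in for $M$ (the names here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 10                   # n samples (rows), m variables (columns)
M = rng.normal(size=(n, m))      # stand-in for the data matrix

# Thin SVD: M = U @ np.diag(s) @ Vt
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# 3-D coordinates (SVD1, SVD2, SVD3) of each sample: project the rows of M
# onto the top three right singular vectors. This equals U[:, :3] * s[:3],
# and the columns of U are eigenvectors of M @ M.T.
coords = M @ Vt[:3].T            # shape (n, 3)
```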

chetak
  • Please use math typesetting. More information: http://meta.math.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference – Sycorax Aug 03 '16 at 16:25
  • "Reducing $m$ via SVD to 3" means that you take the first three terms from the [dyadic expansion](https://en.wikipedia.org/wiki/Singular_value_decomposition#Applications_of_the_SVD) of the matrix $X=(X_1, X_2, \dotsc ,X_m)$? Also, when you write the SVD values, I am not sure what is matrix $T$ and I assume that $M=X$, right? "By picking arbitrary point in SVD" you mean taking an arbitrary 3-dimensional vector? Please, try to specify these questions, I am very confused by the presented question in that form. – michalOut Aug 03 '16 at 16:29
  • $M$ represents the $(n \times m)$ matrix; $M^T$ represents its transpose. Yes, the arbitrary point refers to an arbitrary 3-dimensional vector in the new vector space. – chetak Aug 04 '16 at 10:12
  • Please edit your question to more clearly reflect your intent – Glen_b Aug 12 '16 at 03:20
  • @chetak Your question has been closed as a duplicate, please take a look there and feel free to ask any additional questions if you have any. I wrote my answer there trying to take your question into account as well. I have a section about SVD in my answer. – amoeba Aug 12 '16 at 11:32

1 Answer


Not really, no. Take a look at this picture:

[Figure: 2-D scatter of points on the $L$ and $W$ axes with the first singular direction $U_1$ drawn through them]

In this case, we have two dimensions. Let's say we reduce it to one (keeping just $U_1$). What happens is that all the points are projected onto $U_1$, so many points (an infinite number, in fact) map to the same point on $U_1$. The same happens with any dimensionality reduction via SVD: whatever lay along the discarded directions cannot be recovered from the reduced coordinates.
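To make the many-to-one point concrete, here is a small numpy sketch (my own illustration, not part of the original answer): two points that differ only along the discarded direction get the same coordinate on $U_1$, so mapping that coordinate back can only recover a single point on the $U_1$ line.

```python
import numpy as np

# Toy 2-D data; the two columns play the role of the L and W axes
X = np.array([[ 2.0,  1.0],
              [ 1.0,  0.5],
              [-1.0, -0.5],
              [-2.0, -1.0],
              [ 0.0,  1.0]])
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
v1, v2 = Vt[0], Vt[1]            # directions of U1 and U2 in the original axes

# Two different 2-D points with the same projection onto v1
a = 1.5 * v1 + 0.7 * v2
b = 1.5 * v1 - 0.3 * v2
print(a @ v1, b @ v1)            # identical 1-D coordinates (both 1.5)

# Mapping the shared 1-D coordinate back gives the same point on the U1 line,
# so the difference between a and b along v2 is lost
print((a @ v1) * v1, (b @ v1) * v1)
```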

roundsquare
  • Sure, so in your example above how do I transform the "same point on $U_1$" back to a point on the original axes? I understand that this would be an approximation, but I'd like to understand the formula behind the transformation. See "inverse_transform" in scikit-learn: http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD.inverse_transform – chetak Aug 04 '16 at 06:38
  • @chetak Conceptually, you take that point on $U_1$ and see where it lies on the $L$ and $W$ axes. You can see how scikit-learn does it by looking at the source of the "inverse_transform" function. If you have a row vector $X=[x_1, x_2, \ldots, x_k]$ (where $k$ is the number of dimensions you reduced to), you can multiply it by the $k \times m$ matrix whose rows are the retained eigenvectors (scikit-learn's `components_`). Note that scikit-learn lets $X$ be a matrix with any number of rows and $k$ columns; in that case, each row is a point being transformed back to the original space. (A short code sketch follows this comment thread.) – roundsquare Aug 04 '16 at 16:10
  • Thanks @roundsquare, what would be an intuitive explanation for this multiplication? – chetak Aug 11 '16 at 07:27
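A hedged sketch of the multiplication discussed in the comments above, assuming scikit-learn's `TruncatedSVD` (the class the asker linked): `inverse_transform` simply multiplies the reduced coordinates by the $k \times m$ `components_` matrix, so an arbitrary 3-D point, whether or not it came from the data, can be mapped back the same way.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
M = rng.normal(size=(100, 10))       # n = 100 samples, m = 10 variables

svd = TruncatedSVD(n_components=3)
Z = svd.fit_transform(M)             # (100, 3) reduced coordinates

# inverse_transform is the same as multiplying by components_ (shape (3, 10)),
# whose rows are the retained right singular vectors
assert np.allclose(svd.inverse_transform(Z), Z @ svd.components_)

# An arbitrary point in the 3-D SVD space maps back the same way; the result
# is only an approximation, since whatever lay along the discarded directions
# is gone for good
z = np.array([[0.5, -1.0, 2.0]])
x_hat = z @ svd.components_          # shape (1, 10): approximate X_1 ... X_m
print(x_hat)
```

Intuitively, each reduced coordinate says how much of the corresponding component direction the point contains, and the multiplication sums those coordinate-weighted directions to express the point back in terms of the original $m$ variables.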