Suppose I have a training dataset A which is m x n, with samples arranged as rows, and I use singular value decomposition (SVD) to find k principal directions defining a lower-dimensional subspace, with k < n.
The output of the truncated SVD, conventionally named U, Σ, and V, gives me a rank-k approximation of my training set: A ≈ U Σ Vᵀ, where U is m x k, Σ is k x k diagonal, and V is n x k.
Moreover, the product U Σ is m x k, with each row representing a vector in the lower-dimensional space that approximates the corresponding row of the original n-dimensional space. From this "lossy" representation it is also possible to map back to the n-dimensional space by right-multiplying with Vᵀ.
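For concreteness, here is a minimal NumPy sketch of the setup as I understand it (the matrix A, the sizes m, n, k, and the random data are all made up for illustration):

```python
import numpy as np

# Made-up training set: m samples as rows, n features each.
m, n, k = 100, 20, 5
rng = np.random.default_rng(0)
A = rng.normal(size=(m, n))

# Thin SVD; NumPy returns Vt (i.e. transpose(V)) directly.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the top k singular directions.
U_k = U[:, :k]           # m x k
S_k = np.diag(s[:k])     # k x k
V_k = Vt[:k, :].T        # n x k

# Reduced representation: each row approximates the corresponding row of A.
A_reduced = U_k @ S_k    # m x k

# Lossy return to the original n-dimensional space: right-multiply by V_k.T.
A_approx = A_reduced @ V_k.T          # m x n, rank-k approximation of A
print(np.linalg.norm(A - A_approx))   # reconstruction error
```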
All of this is well and good, but say I now train some learning system using U Σ as my training set instead of the original A, for instance the sketch below. To exploit the trained system later, I would have to query it with vectors in the lower-dimensional space.
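As a made-up example of such a learning system, here is a nearest-neighbor index (using scikit-learn, an arbitrary choice on my part) fitted on the reduced rows:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Same made-up setup as in the sketch above.
m, n, k = 100, 20, 5
rng = np.random.default_rng(0)
A = rng.normal(size=(m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_reduced = U[:, :k] @ np.diag(s[:k])   # m x k training set

# "Some learning system": here an arbitrary nearest-neighbor index
# built over the reduced rows. Any query must now also be 1 x k.
nn = NearestNeighbors(n_neighbors=3).fit(A_reduced)
```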
Now suppose I have a query vector Q which is 1 x n and not present in the training set. I want to find the 1 x k vector in the lower-dimensional space that most appropriately corresponds to Q. How do I do this?
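To make the question concrete: my best guess is that the mapping is simply Q_k = Q V, since right-multiplying by V reproduces U Σ exactly for the training rows, but I don't know whether that is the right treatment for unseen data. A sketch of the guess:

```python
import numpy as np

# Same made-up setup as in the sketches above.
m, n, k = 100, 20, 5
rng = np.random.default_rng(0)
A = rng.normal(size=(m, n))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U_k, S_k, V_k = U[:, :k], np.diag(s[:k]), Vt[:k, :].T

# Sanity check: for the training rows, right-multiplying by V_k
# reproduces the reduced representation U Σ exactly.
print(np.allclose(A @ V_k, U_k @ S_k))   # True

# A query vector not present in the training set (1 x n).
Q = rng.normal(size=(1, n))

# My guess: map Q the same way, i.e. Q_k = Q V. Is this correct for
# data that was not part of the SVD?
Q_k = Q @ V_k   # 1 x k
```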
Or am I completely missing the point of SVD and misunderstanding how it is useful for dimensionality reduction?