The point of low-rank approximation is not necessarily just dimension reduction.
The idea is that, based on domain knowledge, we expect the underlying data to form a low-rank matrix. That holds only in the ideal case, though, where the entries are not affected by noise, corruption, missing values, etc.; the observed matrix typically has much higher rank.
Low-rank approximation is thus a way to recover the "original" low-rank matrix (the "ideal" matrix before it was corrupted by noise, etc.): find the low-rank matrix that is most consistent with the observed entries, and use it as an approximation to the ideal one. Having recovered this matrix, we can substitute it for the noisy version and hopefully get better results.
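As a minimal sketch of this idea (assuming NumPy and a synthetic example): build an "ideal" rank-2 matrix, corrupt it with dense noise so the observed matrix becomes full rank, then recover a rank-2 approximation via the truncated SVD, which gives the closest rank-k matrix in Frobenius norm (Eckart–Young). The sizes, rank, and noise level below are illustrative choices, not anything prescribed.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Ideal" low-rank matrix: rank 2, size 100 x 80 (hypothetical example)
A_ideal = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 80))

# Observed matrix: ideal matrix plus small dense noise (now effectively full rank)
A_obs = A_ideal + 0.1 * rng.standard_normal(A_ideal.shape)

# Truncated SVD: keep only the top-k singular triplets
k = 2
U, s, Vt = np.linalg.svd(A_obs, full_matrices=False)
A_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The rank-k approximation is typically closer to the ideal matrix
# than the noisy observation itself.
print("rank of observed matrix:", np.linalg.matrix_rank(A_obs))
print("error of observation vs ideal:", np.linalg.norm(A_obs - A_ideal))
print("error of rank-k approx vs ideal:", np.linalg.norm(A_hat - A_ideal))
```

With missing entries rather than additive noise, the same principle appears as low-rank matrix completion, but the recovery procedure is different from a plain truncated SVD.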