In the context of factorization methods, latent features are usually meant to characterize items along each dimension. Let me explain by example.
Suppose we have a matrix of user-item interactions $R$. The model assumption in matrix factorization methods is that each cell $R_{ui}$ of this matrix is generated by, for example, $p_u^T q_i$ — a dot product between a latent vector $p_u$ describing user $u$ and a latent vector $q_i$ describing item $i$. Intuitively, this product measures how similar the two vectors are. During training you look for "good" vectors, i.e. vectors such that the approximation error over the observed interactions is minimized.
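To make this concrete, here is a minimal sketch of training such a factorization with stochastic gradient descent. It is illustrative only, not any particular library's implementation; the toy matrix, the number of latent dimensions `k`, and the learning-rate and regularization values are all made up for the example.

```python
import numpy as np

# Toy user-item matrix R; 0 marks "no observed interaction".
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                            # number of latent dimensions (a free choice)
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))     # row u is the user vector p_u
Q = rng.normal(scale=0.1, size=(n_items, k))     # row i is the item vector q_i

lr, reg = 0.01, 0.02                             # illustrative learning rate and L2 penalty
for epoch in range(200):
    for u, i in zip(*R.nonzero()):               # iterate only over observed cells
        err = R[u, i] - P[u] @ Q[i]              # residual of the p_u^T q_i approximation
        p_u = P[u].copy()                        # keep the old p_u for the q_i update
        P[u] += lr * (err * Q[i] - reg * P[u])   # SGD step on p_u
        Q[i] += lr * (err * p_u - reg * Q[i])    # SGD step on q_i

print(np.round(P @ Q.T, 2))                      # dense reconstruction of R
```

Each row of `P` is a user's latent vector $p_u$ and each row of `Q` is an item's latent vector $q_i$; the product `P @ Q.T` fills in predictions for the unobserved cells.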
One might hope that these latent features are meaningful, that is, that there is a feature in the user's vector $p_u$ like "likes items with property X" and a corresponding feature in the item's vector $q_i$ like "has property X". Unfortunately, unless interpretability is somehow enforced, it is hard to find interpretable latent features. So you can think of latent features that way as an intuition, but you shouldn't use individual dimensions to reason about the data.
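One quick way to see why individual dimensions are not pinned down: for any invertible $k \times k$ matrix $A$, $p_u^T q_i = (A^T p_u)^T (A^{-1} q_i)$, so replacing every $p_u$ with $A^T p_u$ and every $q_i$ with $A^{-1} q_i$ yields exactly the same predictions and the same training error. The objective cannot distinguish between these parameterizations, so the coordinates of $p_u$ and $q_i$ carry no fixed meaning unless extra constraints (e.g. non-negativity or sparsity) are imposed.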