I have read that the dimensionality of the feature map of the Gaussian kernel is infinite. However, I saw another post (Kernel SVM) stating that the feature map for a kernel SVM maps to a space of dimensionality at most the size of the training sample.
These seem contradictory to me: surely, if we used the Gaussian kernel for an SVM, we would then be working in a vector space of infinite dimension.
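For concreteness, the infinite-dimensionality claim I have in mind comes from the usual Taylor-series expansion of the Gaussian kernel; here is my own sketch of it in one dimension, with bandwidth $\sigma$:
$$
k(x, z) = \exp\!\left(-\frac{(x-z)^2}{2\sigma^2}\right)
= \exp\!\left(-\frac{x^2}{2\sigma^2}\right)\exp\!\left(-\frac{z^2}{2\sigma^2}\right)\sum_{n=0}^{\infty}\frac{1}{n!}\left(\frac{xz}{\sigma^2}\right)^n,
$$
so the implied feature map has one component for every power of $x$, i.e. infinitely many components.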
My main question is: does the number of components of $\phi(x)$ dictate the dimensionality of the feature space?
For example, does the following map correspond to a feature space of dimensionality 4? $$ \phi(x_i) = \left( 1,\; 2x_i,\; (5x_i)^2,\; x_i^{1/2} \right) $$
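To make the example concrete, here is a small sketch (my own illustration in Python/NumPy, not taken from either post) of the explicit 4-component map above and the kernel it induces as an ordinary dot product; the names `phi` and `explicit_kernel` are mine, and I assume $x_i \ge 0$ so that $x_i^{1/2}$ is real.

```python
import numpy as np

def phi(x):
    """The explicit 4-component feature map from my example (assumes x >= 0)."""
    return np.array([1.0, 2.0 * x, (5.0 * x) ** 2, np.sqrt(x)])

def explicit_kernel(x, z):
    """Kernel induced by phi: an ordinary dot product in R^4."""
    return np.dot(phi(x), phi(z))

# The feature vectors live in R^4, which is what I mean by
# "dimensionality 4" for this feature space.
x, z = 2.0, 3.0
print(phi(x))                 # 4 components
print(explicit_kernel(x, z))  # scalar value of the induced kernel
```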