How to prove that for the radial basis function kernel $k(x, y) = \exp\left(-\frac{\|x-y\|^2}{2\sigma^2}\right)$ there is no finite-dimensional feature space $H$ such that for some $\Phi: \mathbb{R}^n \to H$ we have $k(x, y) = \langle \Phi(x), \Phi(y)\rangle$?
-
Is this question more appropriate for Mathematics? – Leo Sep 20 '12 at 00:37
-
One possible plan of attack would be to exhibit a subspace of $H$ that is not closed. – Nick Alger Sep 28 '12 at 07:36
-
@Nick Alger: maybe this helps: http://stats.stackexchange.com/questions/80398/svm-in%EF%AC%81nite-dimensional-feature-space/168309#168309 – Aug 22 '15 at 08:33
2 Answers
The Moore–Aronszajn theorem guarantees that a symmetric positive definite kernel is associated with a unique reproducing kernel Hilbert space (RKHS). (Note that while the RKHS is unique, the feature mapping itself is not.)
Therefore, your question can be answered by exhibiting an infinite-dimensional RKHS for the Gaussian (RBF) kernel. You can find an in-depth study of this in "An explicit description of the reproducing kernel Hilbert spaces of Gaussian RBF kernels" by Steinwart et al.
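To make the infinite-dimensionality concrete, here is a numerical sketch (my own illustration, not part of the cited paper): in 1D the Gaussian kernel admits an explicit infinite-dimensional feature map obtained by Taylor-expanding $\exp(xy/\sigma^2)$, with components $\Phi_k(x) = e^{-x^2/(2\sigma^2)}\, x^k / (\sigma^k \sqrt{k!})$ for $k = 0, 1, 2, \ldots$; truncating the series approximates the kernel arbitrarily well.

```python
import math
import numpy as np

# Sketch: write k(x, y) = exp(-x^2/2s^2) exp(-y^2/2s^2) exp(xy/s^2)
# and expand exp(xy/s^2) as a power series. Each series term factors
# into a product Phi_k(x) * Phi_k(y), giving an explicit (infinite)
# feature map; we truncate it at K components.

def phi(x, sigma=1.0, K=30):
    """First K components of the Taylor feature map of the 1D RBF kernel."""
    ks = np.arange(K)
    factorials = np.array([math.factorial(k) for k in ks], dtype=float)
    return np.exp(-x**2 / (2 * sigma**2)) * x**ks / (sigma**ks * np.sqrt(factorials))

x, y, sigma = 0.7, -1.2, 1.0
approx = phi(x, sigma) @ phi(y, sigma)              # truncated inner product
exact = math.exp(-(x - y)**2 / (2 * sigma**2))      # the RBF kernel itself
assert abs(approx - exact) < 1e-10                  # series converges fast
```

No finite truncation reproduces the kernel exactly on all of $\mathbb{R}$, which is consistent with the feature space being genuinely infinite-dimensional.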

Assume that the Gaussian RBF kernel $k(x, y)$ is defined on a domain $X \times X$ where $X$ contains infinitely many vectors. One can prove (Gaussian Kernels, Why are they full rank?) that for any set of distinct vectors $x_1, \ldots, x_m \in X$ the matrix $(k(x_i, x_j))_{m \times m}$ is nonsingular, which means that the vectors $\Phi(x_1), \ldots, \Phi(x_m)$ are linearly independent (a nontrivial linear dependence $\sum_i c_i \Phi(x_i) = 0$ would give $Kc = 0$ for $c \neq 0$). Since $m$ can be chosen arbitrarily large, a feature space $H$ for the kernel $k$ cannot have a finite number of dimensions.
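The full-rank claim above is easy to check numerically. A minimal sketch (my own, not from the linked question; `gaussian_gram` is a helper I define here): build the Gram matrix $K_{ij} = k(x_i, x_j)$ for $m$ random distinct points and verify its rank is $m$.

```python
import numpy as np

def gaussian_gram(X, sigma=1.0):
    """Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    # Pairwise squared distances via broadcasting.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * sigma**2))

rng = np.random.default_rng(0)
for m in (5, 10, 20):
    X = rng.standard_normal((m, 3))            # m distinct points in R^3
    K = gaussian_gram(X)
    assert np.linalg.matrix_rank(K) == m       # full rank for every m tried
```

Since the rank equals $m$ for every $m$ we try, the images $\Phi(x_1), \ldots, \Phi(x_m)$ span an $m$-dimensional subspace of $H$, and $m$ is unbounded.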
-
Here you find a more 'intuitive' explanation of how $\Phi$ can map onto a space of dimension equal to the size of the training sample, even for an infinite training sample: http://stats.stackexchange.com/questions/80398/svm-in%EF%AC%81nite-dimensional-feature-space/168309#168309 – Aug 22 '15 at 08:36