5

The Hilbert-Schmidt integral operator determines the underlying measure if a characteristic kernel is used. Do the eigenvalues of the Hilbert-Schmidt integral operator alone determine the underlying measure up to translation, reflection and rotation?

Details: Suppose we have a measure $\mu$ on a Euclidean space $X=\mathbb R^n$ and a positive-semidefinite kernel $\kappa: X \times X \rightarrow \mathbb R$, i.e. $\kappa$ is symmetric and every Gram matrix $G$ defined by $G_{i,j} = \kappa(x_i, x_j)$ for a finite set $\{x_1, \dots, x_m\} \subset X$ is positive semidefinite. Assume moreover that $\kappa$ is a function of distance, $\kappa(x,y) = \kappa_0(\|x-y\|)$; for example $\kappa(x,y) = e^{-\gamma \|x-y\|^2}$. Suppose we are given a class of measures $\mathcal S$ on $X=\mathbb R^n$, for example those induced by continuous probability density functions integrating to 1.

We then define the Hilbert-Schmidt integral operator: $$K_{\mu}: \phi \mapsto \int_X \kappa(x,-) \phi(x) \text{d} \mu (x)$$ The operator is both compact and self-adjoint, and thus admits an orthogonal decomposition and real eigenvalues by the spectral theorem. Let's define its vector of eigenvalues by: $$\vec \lambda(\mu) = (\lambda_1, \lambda_2, \cdots) \text{ where } \lambda_i \phi_i = K_{\mu} \phi_i \text{ and } \lambda_1 \ge \lambda_2 \ge \cdots $$
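For concreteness, here is a minimal numerical sketch of $\vec \lambda(\mu)$ for an empirical measure $\hat\mu = \frac1m \sum_i \delta_{x_i}$ (Python with NumPy; the helper names are my own). Plugging $y = x_j$ into the eigenvalue equation reduces it to the $m \times m$ matrix $\frac1m G$, whose eigenvalues are exactly the non-zero part of the spectrum of $K_{\hat\mu}$:

```python
import numpy as np

def gaussian_gram(X, Y, gamma=1.0):
    """Gram matrix of kappa(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def empirical_spectrum(X, gamma=1.0):
    """Non-zero eigenvalues of K_mu when mu is the empirical measure on the
    rows of X: plugging y = x_j into the eigenvalue equation reduces the
    problem to the m-by-m matrix (1/m) G."""
    m = X.shape[0]
    return np.sort(np.linalg.eigvalsh(gaussian_gram(X, X, gamma) / m))[::-1]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))        # 500 samples from a Gaussian on R^2
print(empirical_spectrum(X)[:5])     # leading (approximate) eigenvalues
```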

We immediately observe that $\vec \lambda(\mu) = \vec \lambda (\rho \cdot \mu)$ whenever $\rho$ is an isometry of $X = \mathbb R^n$ and $\rho \cdot \mu := \mu \circ \rho^{-1}$. Indeed, defining $\rho \cdot \phi := \phi \circ \rho^{-1}$, we have $\rho^{-1} \cdot K_{\rho \cdot \mu}(\rho \cdot \phi) = K_\mu(\phi)$: $$\left( \rho^{-1} \cdot K_{\rho \cdot \mu}(\rho \cdot \phi)\right) (y) = \int_X \kappa(x, \rho y) \phi(\rho^{-1} x) \,\text{d} (\rho \cdot \mu)(x) = \int_X \kappa(\rho x, \rho y) \phi(x) \,\text{d} \mu(x) = K_\mu(\phi)(y),$$ where the second equality is the change of variables $x \mapsto \rho x$, and the third uses $\kappa(\rho x, \rho y) = \kappa(x, y)$, which holds because $\rho$ preserves distances. Consequently, $$K_\mu \phi = \lambda \phi \implies K_{\rho \cdot \mu} (\rho \cdot \phi) = \rho \cdot (K_\mu \phi) = \rho \cdot (\lambda \phi) = \lambda (\rho \cdot \phi). $$ The question is now whether the converse holds: do we have $\vec \lambda(\mu) = \vec \lambda(\nu) \implies \exists \rho: \mu = \rho \cdot \nu$?
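This invariance is also easy to verify numerically for empirical measures: an isometry preserves pairwise distances, so the Gram matrix, and with it the spectrum, is unchanged. A minimal sketch (assuming the Gaussian kernel with $\gamma = 1$; helper names are my own):

```python
import numpy as np

def empirical_spectrum(X, gamma=1.0):
    """Non-zero spectrum of K_mu for the empirical measure on the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.sort(np.linalg.eigvalsh(np.exp(-gamma * sq) / len(X)))[::-1]

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3)) * np.array([3.0, 1.0, 0.2])  # anisotropic sample

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
t = rng.normal(size=3)                         # random translation
Y = X @ Q.T + t                                # samples from rho . mu

# Pairwise distances are preserved, so the Gram matrices (hence spectra) agree:
print(np.abs(empirical_spectrum(X) - empirical_spectrum(Y)).max())  # ~1e-15
```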

The kernel $\kappa$ is characteristic iff the map $\Phi: \mu \mapsto K_\mu(\textbf{1}) = \mathbb{E}_{x \sim \mu} [\kappa(x,-)]$ is injective. The Gaussian (or RBF) kernel $\kappa(x,y) = e^{-\gamma\|x-y\|^2}$ is an example of a characteristic kernel. Since $K_\mu(\textbf{1})$ is determined by $K_\mu$, the map $\mu \mapsto K_\mu$ is a fortiori injective. Therefore, the Hilbert-Schmidt integral operator determines the underlying measure (but is not invariant under isometries).
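As a sanity check of the characteristic property, one can estimate $\|\Phi(\mu) - \Phi(\nu)\|^2$ in the RKHS (the squared maximum mean discrepancy) from samples; for a characteristic kernel it vanishes iff $\mu = \nu$. A minimal sketch with the standard biased estimator (helper names are my own):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    """Biased sample estimate of ||Phi(mu) - Phi(nu)||^2 (the squared MMD);
    zero iff mu = nu when the kernel is characteristic."""
    return rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean() \
        - 2.0 * rbf(X, Y, gamma).mean()

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 1))
Y = rng.normal(size=(1000, 1))                       # same law as X
Z = rng.laplace(scale=1/np.sqrt(2), size=(1000, 1))  # same mean and variance,
                                                     # different law
print(mmd2(X, Y))  # close to 0
print(mmd2(X, Z))  # clearly positive
```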

My question then concerns how much information we can discard from the operator while still telling apart underlying measures up to the isometries of Euclidean space. Namely, can we do without the eigenvectors?

Uzu Lim
  • Can you specify which Hilbert space is the HS operator acting on? – Michael Oct 14 '20 at 18:11
  • I think it's fair to consider $L^2(\mathbb{R}^n, \mathbb{R})$. – Uzu Lim Oct 14 '20 at 18:17
  • Ok, so the question is whether the spectrum is sufficient to recover the measure up to isometry. – Michael Oct 14 '20 at 18:44
  • Indeed, that is the question :) – Uzu Lim Oct 14 '20 at 21:08
  • $K_{\rho \cdot \mu}(\rho \cdot \phi) = K_\mu(\phi)$ does not imply $K_{\rho \cdot \mu}$ and $K_\mu$ have the same spectrum. If $K$ is a bounded operator and $U$ is a unitary operator, $KU$ and $K$ need not have the same spectrum. (This would be the case if $K$ and $U$ commuted, but that does not seem to be the case here.) – Michael Oct 15 '20 at 04:04
  • The correct formula was $\rho^{-1} \cdot K_{\rho \cdot \mu}(\rho \cdot \phi) = K_\mu(\phi)$, and I edited the question accordingly. Thanks for pointing out. – Uzu Lim Oct 15 '20 at 13:52

1 Answer

3

...do we have $\vec \lambda(\mu) = \vec \lambda(\nu) \implies \exists \rho: \mu = \rho \cdot \nu$?

For measures that are convex sums of point masses, the answer seems to be yes.

Consider the case where $\mu = \delta_{x_0}$, the point mass at $x_0$. Then the operator is given by $$ (K_{\delta_{x_0}} \phi)(y) = \kappa(x_0, y) \phi(x_0), $$ where we restrict to $\phi \in C_c(\mathbb{R}^n)$ so that pointwise evaluation makes sense, and then extend to $L^2$ by density of $C_c(\mathbb{R}^n)$ in $L^2$.

Evidently, $K_{\delta_{x_0}}$ is a rank-one operator whose range is the linear span of $\kappa(x_0, \cdot)$. The eigenvalue equation $$ \kappa(x_0, y) \phi(x_0) = \lambda \phi(y) $$ tells us (setting $y = x_0$) that the unique non-zero eigenvalue is $\lambda = \kappa(x_0, x_0) = \kappa_0(0) = 1$. So $K_{\delta_{x_0}}$ is a rank-one projection onto the linear span of $\kappa(x_0, \cdot)$.

(If you restrict the question to point masses, then it is true: $K_{\delta_{x_0}}$ and $K_{\delta_{x_1}}$ have the same spectrum $\{1,0,0,\cdots\}$, and $\delta_{x_0}$ can be mapped to $\delta_{x_1}$ by translation.)

Now consider a convex sum $\alpha \delta_{x_1} + (1-\alpha)\delta_{x_2}$, where $0 < \alpha < 1$. By definition, $$ K_{\alpha \delta_{x_1} + (1-\alpha)\delta_{x_2}} = \alpha K_{\delta_{x_1}} + (1- \alpha) K_{\delta_{x_2}}, $$ which is a sum of two rank-one projections. In what follows, things are not as clean as one would like, because the two rank-one projections in the sum do not commute---but we do know the sum is a rank-two self-adjoint operator. In particular, it has two non-zero eigenvalues.

The eigenvalue equation $$ \alpha \kappa(x_1, y) \phi(x_1) + (1 -\alpha )\kappa(x_2, y) \phi(x_2) = \lambda \phi(y) $$ has two non-zero solutions: \begin{align} \lambda_1 &= \alpha + (1-\alpha) \kappa(x_2, x_1) \frac{\phi(x_2)}{\phi(x_1)}, \\ \lambda_2 &= \alpha \kappa(x_2, x_1) \frac{\phi(x_1)}{\phi(x_2)} + (1-\alpha). \end{align} Inspecting these equations shows that, for any $\phi$ in the eigenspace corresponding to $\lambda_1$, $\frac{\phi(x_2)}{\phi(x_1)} = \kappa(x_2, x_1)$. Similarly, for any $\phi$ in the eigenspace corresponding to $\lambda_2$, $\frac{\phi(x_1)}{\phi(x_2)} = \kappa(x_2, x_1)$. So we have the spectrum of $K_{\alpha \delta_{x_1} + (1-\alpha)\delta_{x_2}}$: \begin{align} \lambda_1 &= \alpha + (1-\alpha) \kappa(x_2, x_1)^2, \\ \lambda_2 &= \alpha \kappa(x_2, x_1)^2 + (1-\alpha). \quad (*) \end{align}

In other words, the two non-zero eigenvalues are convex sums of $1$ and $\kappa(x_2, x_1)^2$ with weights $\alpha$ and $1-\alpha$.

Let $(\lambda_1, \lambda_2)$ and $(\lambda_1', \lambda_2')$ be the nonzero eigenvalues of HS operators corresponding to measures of the above type. If $\lambda_1 = \lambda_1'$ and $\lambda_2 = \lambda_2'$, the explicit solution $(*)$ implies that $$ \kappa(x_2, x_1) = \kappa(x_2', x_1'), \mbox{ and } \alpha = \alpha'. $$ Therefore there is an isometry mapping $\alpha \delta_{x_1} + (1-\alpha)\delta_{x_2}$ to $\alpha' \delta_{x_1'} + (1-\alpha')\delta_{x_2'}$. My guess is that this argument extends to general finite convex sums of point masses, i.e. the explicit expression of the eigenvalues should characterize the support of the measure up to isometry.

For general measures, I do not know, although the above argument is somewhat suggestive. Perhaps one can use the fact that finite convex sums of point masses are weak-* dense in the family of Borel probability measures and approximate.
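Whether the spectrum behaves continuously under such weak-* approximation is of course part of what would need proving, but a quick experiment is at least consistent with it: the leading eigenvalues of $K_{\hat\mu_m}$ appear to stabilize as the number of atoms $m$ grows (a sketch; helper names are my own):

```python
import numpy as np

def empirical_spectrum(X, gamma=1.0, top=5):
    """Leading non-zero eigenvalues of K_mu for the empirical measure on X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.sort(np.linalg.eigvalsh(np.exp(-gamma * sq) / len(X)))[::-1][:top]

rng = np.random.default_rng(3)
# Empirical measures converge weak-* to the uniform measure on [-1,1]^2;
# the leading eigenvalues appear to stabilize accordingly.
for m in (100, 400, 1600):
    X = rng.uniform(-1.0, 1.0, size=(m, 2))
    print(m, np.round(empirical_spectrum(X), 4))
```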

Michael
  • Thank you, I first edited the action part in the original question to show that $\vec \lambda(\mu) = \vec \lambda(\rho \cdot \mu)$. The right way was to conjugate by the isometry and not just compose. – Uzu Lim Oct 15 '20 at 13:40
  • It seems that the key to a possible solution would be the fact that $\kappa(x,y) = \kappa(f(x), f(y))$ for all $x, y$ holds iff $f$ is an isometry. – Uzu Lim Oct 17 '20 at 23:59
  • @FinnLim That's how the other direction follows more or less immediately (from the formula you stated): $f$ induces a unitary operator $U$, and $U^* K U$ and $K$ have the same spectrum for any bounded operator $K$. But for the converse, I would guess one needs to see, explicitly, how the kernel $\kappa$ shows up in the spectrum. – Michael Oct 18 '20 at 00:07
  • I think there's an error in your eigenvalues, and the desired rigidity may be lost. The equation following "The eigenvalue equation..." amounts to finding the eigenvalues of the matrix $\begin{bmatrix} \alpha & (1-\alpha)\kappa \\ \alpha\kappa & (1-\alpha)\end{bmatrix} $ where $\kappa = \kappa(x_1, x_2)$, with eigenvalues $\frac12(1 \pm \sqrt{1 + 4\alpha(1-\alpha)(\kappa^2-1)})$. Fixing this doesn't ensure the equality $\alpha=\alpha', \kappa = \kappa'$ (see the numerical sketch after this thread). – Uzu Lim Oct 22 '20 at 01:27
  • @FinnLim The eigenvalue equation is a functional equation in $\phi(\cdot)$, how does it reduce to your 2-by-2 matrix...? – Michael Oct 22 '20 at 17:16
  • It reduces by plugging in $y=x_1$ and $y=x_2$, which allows the inference of $(\phi(x_1), \phi(x_2))$. Plugging these values back to the functional equation determines $\phi$. – Uzu Lim Oct 23 '20 at 00:09
  • @FinnLim "plugging in y=x1 and y=x2"---yes, agree with this step. "...which allows the inference of (ϕ(x1),ϕ(x2))"---this cannot be right. This implies that, once (ϕ(x1),ϕ(x2)) is "determined", then those two values are the same for all ϕ in that eigenspace---take ϕ and a scalar multiple of ϕ, clearly impossible. Only the ratio ϕ(x1)/ϕ(x2) is fixed in an eigenspace. – Michael Oct 25 '20 at 13:16
  • You're right, I didn't say it precisely there. But it's still true that $\phi$ is determined up to a constant, and the eigenvalues are determined. In any case, my calculation of the eigenvalues differs from yours, and I see a remaining degree of freedom. – Uzu Lim Oct 26 '20 at 19:05
  • @FinnLim Hm, do you agree with the two equations for $\lambda_1$ and $\lambda_2$ after "...has two non-zero solutions:"? – Michael Oct 26 '20 at 19:58
  • I think that equation is incorrect. See my comment above "I think there's an error in your eigenvalues, ..."; it shows the matrix and the associated eigenvalues. – Uzu Lim Oct 27 '20 at 03:11
  • @FinnLim After "plugging in $y=x_1$ and $y=x_2$" into the eigenvalue equation, if you simply divide by $\phi(x_1)$ and $\phi(x_2)$ respectively, you get those two equations. (How is the 2-by-2 matrix obtained? The rank-two operator in question is self-adjoint, whereas your matrix is not symmetric.) – Michael Oct 27 '20 at 11:38
  • The equation following "The eigenvalue equation" isn't symmetric, and that happens despite the original integral operator being self-adjoint because the dual space looks different, I think. In any case, what I'm claiming should be easy to follow. The equation following "the eigenvalue equation" is equivalent to $[\phi(x_1), \phi(x_2)]^T$ being an eigenvector of the matrix $\begin{bmatrix} \alpha & (1-\alpha)\kappa(x_1, x_2) \\ \alpha \kappa(x_1, x_2) & (1-\alpha)\end{bmatrix}$, and by the usual formula for eigenvalues of 2x2 matrices my eigenvalue calculations follow. – Uzu Lim Oct 27 '20 at 14:57
  • @FinnLim Ok, good point. Then it's a counter-example to the question. "...because dual space looks different I think"---rather it's because the set $\{\kappa(x, \cdot)\}$ is not orthogonal. From this perspective, the geometry of the issue is not as clean. – Michael Oct 28 '20 at 02:18
  • I'll write a self-reply and close the question, maybe. – Uzu Lim Oct 28 '20 at 12:04
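A small numerical sketch of the counter-example the comments converge to (the parameter values are my own illustrative choice). The two non-zero eigenvalues of $K_{\alpha \delta_{x_1} + (1-\alpha)\delta_{x_2}}$ are those of the $2 \times 2$ matrix from the comments, and they depend on $(\alpha, \kappa)$ only through the trace $1$ and the determinant $\alpha(1-\alpha)(1-\kappa^2)$, leaving one degree of freedom:

```python
import numpy as np

def two_atom_spectrum(alpha, kappa):
    """Non-zero eigenvalues of K_mu for mu = alpha*delta_{x1} + (1-alpha)*delta_{x2},
    with kappa = kappa(x1, x2).  Plugging y = x1, x2 into the eigenvalue
    equation gives the matrix G @ D with D = diag(alpha, 1-alpha); we use the
    similar symmetric matrix sqrt(D) G sqrt(D) for numerical stability."""
    G = np.array([[1.0, kappa], [kappa, 1.0]])
    Dh = np.diag([np.sqrt(alpha), np.sqrt(1.0 - alpha)])
    return np.sort(np.linalg.eigvalsh(Dh @ G @ Dh))[::-1]

# The spectrum depends on (alpha, kappa) only through the determinant
# alpha*(1-alpha)*(1-kappa^2) (the trace is always 1), so match the
# determinants while using different weights:
a1, k1 = 0.3, 0.5
a2 = 0.5
k2 = np.sqrt(1 - a1 * (1 - a1) * (1 - k1**2) / (a2 * (1 - a2)))

print(two_atom_spectrum(a1, k1))  # [0.8041... 0.1958...]
print(two_atom_spectrum(a2, k2))  # same two non-zero eigenvalues
```

Since the weights differ ($\alpha \ne \alpha'$), no isometry of $\mathbb R^n$ maps one measure to the other, yet the two operators share the same spectrum.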