
I have been reading up a bit on generative models, particularly trying to understand the math behind VAEs. In a talk I was watching online, the speaker gives the following definition of the marginal likelihood, where the latent variables are integrated out:

$$ p(x) = \int p(x|z)p(z)dz $$

Here we are marginalizing out the latent variable denoted by z.

Now, imagine $x$ is sampled from a very high-dimensional space, like the space of all possible images of a given size, but the prior $p(z)$ is a unit Gaussian distribution. I am trying to understand why this integral would be difficult to evaluate, considering $p(z)$ is one dimensional.
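
To make the question concrete: my (possibly naive) understanding is that one could approximate this integral by Monte Carlo, sampling from the prior,

$$ p(x) \approx \frac{1}{N} \sum_{i=1}^{N} p(x|z_i), \qquad z_i \sim p(z), $$

which seems cheap to compute if $z$ really is low dimensional.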

Luca

1 Answer


$z$ is still fairly high dimensional; typical values might range between 16 and 1000. This is still much lower than the dimension of $x$, which might be on the order of 10,000 to 1 million. $p(z)$ is a standard multivariate Gaussian with identity covariance, not a one-dimensional Gaussian.
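
To see why the naive approach breaks down, here is a rough sketch (my own illustration, with a made-up linear "decoder" standing in for the neural network a VAE would use) of estimating $p(x)$ by Monte Carlo with samples from the prior:

```python
import numpy as np

# Illustrative sketch only: naive Monte Carlo estimate of
# p(x) = E_{z ~ p(z)}[p(x|z)] with z ~ N(0, I_d).
# The "decoder" here is a made-up linear map W z + b with isotropic Gaussian
# observation noise, just to have a concrete p(x|z); in a real VAE it would
# be a neural network.

rng = np.random.default_rng(0)

d_z, d_x = 100, 1000          # latent and data dimensions (typical orders of magnitude)
W = rng.normal(size=(d_x, d_z)) / np.sqrt(d_z)
b = np.zeros(d_x)
sigma = 0.1                   # observation noise standard deviation

def log_p_x_given_z(x, z):
    """Log-density of the Gaussian decoder N(x | Wz + b, sigma^2 I)."""
    mu = W @ z + b
    return (-0.5 * np.sum(((x - mu) / sigma) ** 2)
            - d_x * (np.log(sigma) + 0.5 * np.log(2 * np.pi)))

# A single "data point" generated from the model itself.
z_true = rng.normal(size=d_z)
x = W @ z_true + b + sigma * rng.normal(size=d_x)

# Naive Monte Carlo: average p(x|z) over prior samples z_i ~ N(0, I).
n_samples = 10_000
log_w = np.array([log_p_x_given_z(x, rng.normal(size=d_z))
                  for _ in range(n_samples)])

# log-sum-exp for numerical stability; the estimate is dominated by a
# handful of samples, because almost no prior draws land near the latent
# codes that actually explain x.
log_p_x_estimate = np.logaddexp.reduce(log_w) - np.log(n_samples)
print("naive MC estimate of log p(x):", log_p_x_estimate)
```

With $z$ in even 100 dimensions, virtually no prior samples fall near the region of latent space that explains a given $x$, so the estimator has enormous variance and the estimate is far too low unless you use an astronomically large number of samples. That, in practice, is what makes the integral intractable and motivates the variational lower bound.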

shimao
  • Thanks for replying. But in this talk there is only a single latent variable, where the prior is a unit Gaussian. I am wondering why the integral is then still difficult to compute. This also comes up in this Stanford talk (https://www.youtube.com/watch?v=5WoItGTWV54&t=2525s) at 31:12. However, the speaker does not really specify what the problem is with this integral. – Luca Nov 21 '21 at 21:24
  • @Luca I don't see anywhere where the lecturer claims $z$ is only one dimensional. I agree in that case, the integral can be tractably evaluated. – shimao Nov 21 '21 at 21:29
  • Thanks for clarifying. I must have got myself confused. – Luca Nov 21 '21 at 21:34