When working with GANs, is there a clear theoretical difference between choosing my latent distribution to be uniform versus normal?
My intuition says that if I learn from a latent variable $z \sim N(\mu,\sigma^2)$ with a small $\sigma$, the samples my generator produces would be concentrated and very similar to one another, as if I were spanning a smaller region of the space of generated samples with my generator function $G(z)$. In that case the discriminator $D(x)$ would have a harder time discriminating between real and fake data, since $G(z)$ would stay very close to $x \sim p_{data}(x)$.
On the other hand, if $z \sim U[0,1]$, there is perhaps less variability, but since every point in $[0,1]$ is equally likely to be drawn, I believe the generator has a larger exploration space when viewed through an exploration-vs-exploitation lens ($G(z)$ is equally likely to reach any location in the generator's range, rather than a subset of it).
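To make my intuition concrete, here is a minimal sketch of what I mean (the `toy_generator` below is a made-up fixed map, not a trained GAN; I'm only using it to illustrate how the spread of the latent samples carries over to the spread of $G(z)$):

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z):
    # Hypothetical fixed "generator": a smooth nonlinear map from latent space
    # to data space, used only to show how latent spread affects output spread.
    return np.tanh(2.0 * z - 1.0)

n = 10_000
z_normal = rng.normal(loc=0.5, scale=0.05, size=n)   # Gaussian latent with small sigma
z_uniform = rng.uniform(0.0, 1.0, size=n)            # U[0, 1] latent

x_normal = toy_generator(z_normal)
x_uniform = toy_generator(z_uniform)

# The small-sigma Gaussian latent concentrates its samples (and hence G(z))
# in a narrow region, while the uniform latent covers the whole [0, 1] input range.
print("std of G(z), z ~ N(0.5, 0.05^2):", x_normal.std())
print("std of G(z), z ~ U[0, 1]:       ", x_uniform.std())
```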
My thoughts here are somewhat hand-wavy, and I'm looking for a paper, reference, or proof that would confirm or refute what I'm thinking.