
As far as I understand, the hidden layer outputs can be anything, depending on the learning algorithm or optimization rules. I was wondering if it is possible to impose some constraints on a layer's output. For instance, say I want my layer outputs (for a particular layer) to follow a multivariate Gaussian distribution with a certain predefined (or even estimated) mean and covariance. Is it possible to customize the neural network architecture in such a manner? If not, can I at least make sure that the activation outputs for a particular layer follow a Gaussian distribution? Some pointers to the related literature would be very helpful.
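
To make the question concrete, here is a rough sketch of one possible way to encourage this (assuming PyTorch; `ConstrainedNet`, `moment_penalty`, `target_mu`, `target_cov` and the weight `lam` are illustrative names, not an established recipe). It only matches the first two moments of the layer's activations to the target Gaussian, it does not guarantee Gaussianity; whether something like this is the right approach is exactly what I am asking:

```python
# Hypothetical sketch: penalize deviation of a hidden layer's batch
# statistics from a target Gaussian N(target_mu, target_cov).
import torch
import torch.nn as nn

class ConstrainedNet(nn.Module):
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_hidden), nn.Tanh())
        self.head = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        h = self.encoder(x)   # hidden activations we want to constrain
        return self.head(h), h

def moment_penalty(h, target_mu, target_cov):
    # Push the batch mean and covariance of the activations toward the target.
    mu = h.mean(dim=0)
    hc = h - mu
    cov = hc.t() @ hc / (h.shape[0] - 1)
    return ((mu - target_mu) ** 2).sum() + ((cov - target_cov) ** 2).sum()

# Illustrative training step:
# y_hat, h = model(x)
# loss = task_loss(y_hat, y) + lam * moment_penalty(h, target_mu, target_cov)
```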

talk2speech
  • This sounds very similar to what a variational auto-encoder does, if you use only the encoder at the end. – Frans Rodenburg Mar 01 '18 at 11:56
  • @FransRodenburg: Thanks. Could you please elaborate a bit more? – talk2speech Mar 01 '18 at 12:08
  • A variational autoencoder encodes a latent representation of the data constrained to a normal distribution (i.e. it learns the positions of the observations along a normal distribution). Usually you try to reconstruct the original data with a decoder, after which you can use the trained encoder for e.g. classification by adding a layer on top of it. However, adding a layer on top of it means it is no longer constrained to a normal distribution, so I'm not entirely sure how I would go about it. – Frans Rodenburg Mar 01 '18 at 12:11
  • Sounds very interesting, I will have a look. Now, if I do not use any non-linear layer and instead just use linear regression on the normally distributed encoded output, my job will be done. I just need a latent representation of my data which follows a normal distribution. What do you think? Thanks for your help. – talk2speech Mar 01 '18 at 12:22
  • That sounds reasonable to me, but I have just started using VAEs myself, so I am not confident enough to answer your question (hence the long comment). I liked this explanation: http://kvfrans.com/variational-autoencoders-explained/ – Frans Rodenburg Mar 01 '18 at 13:10
  • Thanks Frans for the pointer. I also found that someone has already applied a VAE to solve a problem similar to mine. This looks convincing enough. – talk2speech Mar 02 '18 at 09:14
  • I am working on a similar problem. I want to constrain the output to have unit norm, so the sum of all the activations on the output layer adds to one. Have you found any way to enforce this kind of constraint? – jeffery_the_wind Nov 22 '19 at 18:38
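
A rough sketch of the VAE-style approach suggested in the comments above (assuming PyTorch; `GaussianEncoder`, `kl_to_standard_normal` and the weight `beta` are illustrative names, not part of any cited implementation): the encoder predicts a mean and log-variance per latent dimension, samples via the reparameterization trick, and a KL term pulls the latent code toward a standard normal.

```python
# Hedged sketch of a VAE-style encoder whose latent layer is pushed toward N(0, I).
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    def __init__(self, d_in, d_latent):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU())
        self.mu = nn.Linear(128, d_latent)
        self.logvar = nn.Linear(128, d_latent)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dims, averaged over the batch.
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()

# total_loss = reconstruction_or_task_loss + beta * kl_to_standard_normal(mu, logvar)
```

A regression head can then be trained on `z` (or directly on `mu`), which is essentially the linear-regression-on-the-encoded-output idea discussed in the comments.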

0 Answers