
In most Variational Autoencoder architectures I have seen, the encoder's forward path looks like this:

import torch.nn as nn

x = nn.Linear(in_features, hidden_features)(input)       # hidden layer
x = nn.ReLU()(x)
x_mean = nn.Linear(hidden_features, code_features)(x)     # mean of q(z|x)
x_logvar = nn.Linear(hidden_features, code_features)(x)   # log variance of q(z|x)

where the last layer outputs the log variance. So my question is:

Why do we get the log variance from the encoder? My understanding is that nn.Linear(hidden_features, code_features)(x) can output negative numbers, while a variance must be positive, so for numerical stability we predict the log variance instead. Am I correct? If you have a more complete answer, you are welcome to add to or correct me.
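For concreteness, here is a minimal sketch of how x_logvar is typically consumed in the standard reparameterization step (the tensor shapes and values are illustrative assumptions, not from the code above):

import torch

x_mean = torch.zeros(4)             # illustrative encoder output
x_logvar = torch.full((4,), -2.0)   # a negative raw output is fine here
std = torch.exp(0.5 * x_logvar)     # exp(logvar / 2) is always positive
eps = torch.randn_like(std)         # eps ~ N(0, I)
z = x_mean + eps * std              # sampled latent code

Exponentiating maps the unconstrained linear output to a strictly positive standard deviation, which matches the intuition in the question: the network is free to produce any real number, and the exp keeps the resulting variance valid.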

Daniel Yefimov

0 Answers