
As I understand it, autoencoders are essentially fully-connected multi-layer perceptrons that try to reconstruct their input. So the aim is to learn "an encoding", a distributed representation of some data set... but I don't really see the benefit?

I mean, if I have an MLP with an input layer of 5 neurons, a hidden layer of 3 neurons and an output layer of 5 neurons, and e.g. an input like (1, 2, 3, 4, 5), then the autoencoder should deliver the output (1, 2, 3, 4, 5)?! So why do I need the autoencoder at all?
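Just to make the setup concrete, here is a minimal NumPy sketch of that 5-3-5 architecture (the layer sizes are from my question; the random training data, sigmoid activations, squared-error loss, and learning rate are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: 20 five-dimensional vectors scaled to [0, 1] (made up for the sketch).
X = rng.random((20, 5))

# Encoder (5 -> 3) and decoder (3 -> 5) weights.
W_enc = rng.normal(0, 0.5, (5, 3))
b_enc = np.zeros(3)
W_dec = rng.normal(0, 0.5, (3, 5))
b_dec = np.zeros(5)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W_enc + b_enc)      # 3-dimensional code (the "encoding")
    x_hat = sigmoid(h @ W_dec + b_dec)  # reconstruction of the 5-dim input
    return h, x_hat

# Train with plain gradient descent on the squared reconstruction error.
lr = 1.0
losses = []
for epoch in range(500):
    h, x_hat = forward(X)
    losses.append(np.mean((x_hat - X) ** 2))
    d_xhat = 2 * (x_hat - X) / X.size
    d_z2 = d_xhat * x_hat * (1 - x_hat)   # sigmoid derivative, output layer
    d_h = d_z2 @ W_dec.T
    d_z1 = d_h * h * (1 - h)              # sigmoid derivative, hidden layer
    W_dec -= lr * h.T @ d_z2
    b_dec -= lr * d_z2.sum(axis=0)
    W_enc -= lr * X.T @ d_z1
    b_enc -= lr * d_z1.sum(axis=0)

code, recon = forward(X)
print(f"initial loss: {losses[0]:.4f}, final loss: {losses[-1]:.4f}")
print("code shape:", code.shape)  # (20, 3): each 5-dim input compressed to 3 numbers
```

Note that the intermediate activation `h` is the 3-dimensional code the network is forced to squeeze each 5-dimensional input through, which is exactly the part my question is about.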

For a "classic" MLP the objective might be to classify some data, so the MLP actually computes "something"... but the autoencoder always tries to reconstruct its input, so what is the autoencoder computing? Pretty much nothing?

daniel451

0 Answers