I'm learning about deep learning. I first learned about stacked auto-encoders and am now learning about Restricted Boltzmann Machines (RBMs). However, none of the papers/tutorials I have read motivate why one would want to use an RBM instead of an auto-encoder. So what are the advantages of RBMs over stacked auto-encoders? And when should one use RBMs rather than auto-encoders?
1 Answer
Stacked auto-encoders typically feature many hidden layers. This causes problems for common backpropagation-style training methods, because the backpropagated error signals become vanishingly small by the time they reach the first few layers (the vanishing gradient problem).
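To see why the error shrinks, here is a toy illustration (my own, not from the answer): with sigmoid units the local derivative is at most 0.25, so the error signal is scaled down by at least that factor at each layer it passes through.

```python
# Toy sketch: upper bound on how a backpropagated error shrinks through
# sigmoid layers (assumes unit weights; the 0.25 is max of sigmoid'(x)).
error = 1.0
for layer in range(10):
    error *= 0.25
print(error)  # ~9.5e-07 after 10 layers: almost no learning signal left
```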
A solution is pretraining, i.e., starting from initial weights that already approximate the final solution. One pretraining technique treats each pair of consecutive layers as an RBM to obtain a good set of starting weights, which are then fine-tuned using backpropagation. RBMs are useful here because their training procedure, contrastive divergence, does not suffer from the same issue as backpropagation: each two-layer RBM is trained greedily in isolation, so no error signal ever has to travel through the whole stack.
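For concreteness, here is a minimal sketch of one contrastive divergence (CD-1) update for a single binary RBM, using only numpy. The layer sizes, learning rate, and placeholder data are illustrative assumptions, not from the answer above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 update on a batch of visible vectors v0 (batch x n_vis)."""
    # Positive phase: sample hidden units conditioned on the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Negative phase: one Gibbs step back down to the visibles and up again.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)

    # Gradient approximation: <v h>_data - <v h>_model.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)

# Illustrative sizes; in greedy layer-wise pretraining you would train this
# RBM, then feed its hidden activations as "data" to the next RBM, and
# finally use the learned weights to initialize the auto-encoder before
# backpropagation fine-tuning.
n_vis, n_hid = 784, 256
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
data = (rng.random((32, n_vis)) < 0.5).astype(float)  # placeholder batch
for _ in range(10):
    cd1_step(data, W, b_vis, b_hid)
```

Note that each update only touches one pair of layers, which is why the vanishing gradient problem of deep backpropagation does not arise during this pretraining phase.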

Marc Claesen
Could you please explain the reason why RBMs are used in pretraining of auto-encoders? – robit Jun 21 '17 at 06:35