
I've been looking at DBNs:

  • first, greedy (unsupervised) layer-wise pretraining;
  • then split the weights into recognition weights R and generative weights G, and apply the wake-sleep algorithm (again unsupervised, i.e. on unlabelled data);
  • finally, use back-propagation with labelled data for fine-tuning (a rough sketch of the pretraining step follows below).
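
To make the first step concrete, here is a minimal sketch of greedy layer-wise pretraining with a stack of RBMs trained by 1-step contrastive divergence (CD-1), in plain NumPy. The data, layer sizes, and hyperparameters are made up purely for illustration; this isn't meant as the canonical recipe, just the idea as I understand it:

    # Assumed minimal sketch: greedy layer-wise RBM pretraining with CD-1.
    # Layer sizes, learning rate, and data are placeholders for illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_rbm(data, n_hidden, epochs=10, lr=0.05):
        """Train one RBM with 1-step contrastive divergence; return (W, b_vis, b_hid)."""
        n_vis = data.shape[1]
        W = 0.01 * rng.standard_normal((n_vis, n_hidden))
        b_vis = np.zeros(n_vis)
        b_hid = np.zeros(n_hidden)
        for _ in range(epochs):
            v0 = data
            h0_prob = sigmoid(v0 @ W + b_hid)                  # "recognition" (up) pass
            h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
            v1_prob = sigmoid(h0 @ W.T + b_vis)                # "generative" (down) pass
            h1_prob = sigmoid(v1_prob @ W + b_hid)
            # CD-1 update: positive phase minus negative phase
            W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(data)
            b_vis += lr * (v0 - v1_prob).mean(axis=0)
            b_hid += lr * (h0_prob - h1_prob).mean(axis=0)
        return W, b_vis, b_hid

    # Greedy layer-wise pretraining: each RBM is trained on the hidden
    # activities of the previous one -- all unsupervised, unlabelled data.
    X = (rng.random((500, 784)) < 0.1).astype(float)           # stand-in for real data
    layer_sizes = [256, 64]
    weights, layer_input = [], X
    for n_hidden in layer_sizes:
        W, _, b_hid = train_rbm(layer_input, n_hidden)
        weights.append((W, b_hid))
        layer_input = sigmoid(layer_input @ W + b_hid)

    # The pretrained (W, b_hid) pairs would then initialise a feed-forward
    # net that is fine-tuned with back-propagation on labelled data.
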

This approach looks aesthetically very attractive to me.

But I have heard claims along the lines of: "According to the keynote I saw from Bengio in December 2016, ReLU units solved the problem of training deep nets, rendering pretraining largely obsolete."

Are these pretty techniques set to be consigned to a chapter in the history books of ML? Or are they still relevant?

  • Closely related, if not a duplicate: http://stats.stackexchange.com/questions/261751/why-are-deep-belief-networks-dbn-rarely-used/267282#267282 and http://stats.stackexchange.com/questions/163600/pre-training-in-deep-convolutional-neural-network/163805#163805 – Sycorax Mar 15 '17 at 13:30
