Questions tagged [restricted-boltzmann-machine]

A Restricted Boltzmann Machine (RBM) is a kind of artificial neural network.

RBM stands for Restricted Boltzmann Machine, a kind of artificial neural network. Its connections are undirected (symmetric rather than flowing in only one direction), and it is subject to the restriction that every node belongs to exactly one of two layers (commonly called the visible and hidden layers) and no node is connected to itself or to another node in the same layer, so the connectivity is bipartite.
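
For orientation, here is a minimal NumPy sketch of that structure: a single weight matrix connecting the visible and hidden layers (no within-layer connections), plus one block Gibbs sampling step. The layer sizes and variable names are illustrative, not tied to any particular library.

    import numpy as np

    rng = np.random.default_rng(0)

    n_visible, n_hidden = 784, 128                    # layer sizes are illustrative
    W = rng.normal(0, 0.01, (n_visible, n_hidden))    # connections exist only BETWEEN the two layers
    b_v = np.zeros(n_visible)                         # visible biases
    b_h = np.zeros(n_hidden)                          # hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def sample_hidden(v):
        """p(h = 1 | v): each hidden unit depends only on the visible layer."""
        p = sigmoid(v @ W + b_h)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_visible(h):
        """p(v = 1 | h): each visible unit depends only on the hidden layer."""
        p = sigmoid(h @ W.T + b_v)
        return p, (rng.random(p.shape) < p).astype(float)

    # One step of block Gibbs sampling: v -> h -> v'
    v0 = (rng.random(n_visible) < 0.5).astype(float)
    _, h0 = sample_hidden(v0)
    _, v1 = sample_visible(h0)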

139 questions
136
votes
4 answers

What is the difference between convolutional neural networks, restricted Boltzmann machines, and auto-encoders?

Recently I have been reading about deep learning and I am confused about the terms (or, say, the technologies). What is the difference between convolutional neural networks (CNNs), restricted Boltzmann machines (RBMs), and auto-encoders?
56
votes
8 answers

R libraries for deep learning

I was wondering if there are any good R libraries out there for deep learning neural networks? I know there are nnet, neuralnet, and RSNNS, but none of these seem to implement deep learning methods. I'm especially interested in unsupervised…
29
votes
2 answers

Deep belief networks or Deep Boltzmann Machines?

I'm confused. Is there a difference between Deep belief networks and Deep Boltzmann Machines? If so, what's the difference?
25
votes
2 answers

Restricted Boltzmann Machine: how is it used in machine learning?

Background: Yes, a Restricted Boltzmann Machine (RBM) CAN be used to initialize the weights of a neural network. It CAN also be used in a "layer-by-layer" way to build a deep belief network (that is, to train an $n$-th layer on top of the $(n-1)$-th…
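
As a rough sketch of what that layer-by-layer procedure looks like, here is a toy NumPy version that trains each RBM with CD-1 and feeds its hidden activations to the next one. The function names and hyperparameters are illustrative, not a reference implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_rbm(data, n_hidden, lr=0.1, epochs=5):
        """Toy CD-1 training loop for one RBM (illustrative, not optimized)."""
        n_visible = data.shape[1]
        W = rng.normal(0, 0.01, (n_visible, n_hidden))
        b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
        for _ in range(epochs):
            v0 = data
            p_h0 = sigmoid(v0 @ W + b_h)
            h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
            p_v1 = sigmoid(h0 @ W.T + b_v)            # one reconstruction step
            p_h1 = sigmoid(p_v1 @ W + b_h)
            W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
            b_h += lr * (p_h0 - p_h1).mean(axis=0)
            b_v += lr * (v0 - p_v1).mean(axis=0)
        return W, b_h

    def pretrain_dbn(data, layer_sizes):
        """Greedy layer-by-layer pretraining: the hidden activations of each
        trained RBM become the 'visible' data for the next RBM."""
        weights, inputs = [], data
        for n_hidden in layer_sizes:
            W, b_h = train_rbm(inputs, n_hidden)
            weights.append((W, b_h))
            inputs = sigmoid(inputs @ W + b_h)        # propagate the data upward
        return weights                                # use these to initialize a feed-forward net

    # e.g. stacks = pretrain_dbn(rng.random((100, 784)), [256, 64])
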
25
votes
2 answers

Autoencoders can't learn meaningful features

I have 50,000 images such as these two (they depict graphs of data). I wanted to extract features from these images, so I used the autoencoder code provided by Theano (deeplearning.net). The problem is, these autoencoders don't seem to learn any…
19
votes
2 answers

Modern Use Cases of Restricted Boltzmann Machines (RBMs)?

Background: A lot of the modern research in the past ~4 years (post-AlexNet) seems to have moved away from using generative pretraining for neural networks to achieve state-of-the-art classification results. For example, the top results for MNIST…
18
votes
2 answers

Deep learning vs. Decision trees and boosting methods

I am looking for papers or texts that compare and discuss (either empirically or theoretically) boosting and decision tree algorithms, such as Random Forests, AdaBoost, and GentleBoost applied to decision trees, with deep learning methods…
15
votes
3 answers

What does the "machine" in "support vector machine" and "restricted Boltzmann machine" mean?

Why are they called "machines"? Is there an origin to the word "machine" used in this context? (Similarly, the name "linear programming" can be confusing, but we know why it is called "programming.")
13
votes
3 answers

Feature selection using deep learning?

I want to calculate the importance of each input feature using a deep model, but I found only one paper about feature selection using deep learning: deep feature selection. They insert a layer of nodes connected directly to each feature, before the…
12
votes
4 answers

Good tutorial for Restricted Boltzmann Machines (RBM)

I’m studying the Restricted Boltzmann Machine (RBM) and am having some issues understanding log-likelihood calculations with respect to the parameters of the RBM. Even though a lot of research papers on RBMs have been published, there are no detailed…
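
(For context, a standard result rather than anything specific to this question: for an RBM with energy $E(v,h) = -\sum_i b_i v_i - \sum_j c_j h_j - \sum_{i,j} v_i w_{ij} h_j$, the log-likelihood gradient with respect to a weight is $$\frac{\partial \log p(v)}{\partial w_{ij}} = \langle v_i h_j \rangle_{\text{data}} - \langle v_i h_j \rangle_{\text{model}},$$ where the second, "model" expectation is intractable in general; contrastive divergence and its variants exist to approximate it.)
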
11
votes
2 answers

Are graphical models and Boltzmann machines related mathematically?

While I have actually done some programming with Boltzmann machines in a physics class, I am not familiar with their theoretical characterization. In contrast, I know a modest amount about the theory of graphical models (about the first few chapters…
11
votes
3 answers

What is pretraining and how do you pretrain a neural network?

I understand that pretraining is used to avoid some of the issues with conventional training. If I use backpropagation with, say, an autoencoder, I know I'm going to run into time issues because backpropagation is slow, and also that I can get stuck…
9
votes
1 answer

Persistent Contrastive Divergence for RBMs

When using the persistent CD learning algorithm for Restricted Boltzmann Machines, we start our Gibbs sampling chain in the first iteration at a data point, but contrary to normal CD, in following iterations we don't start our chain over. Instead we…
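
To make the distinction concrete, here is a self-contained NumPy sketch of PCD with binary units. The sizes and data are toy values; the point is only that v_chain is created once and never reset to the training data between updates.

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, n_chains, lr = 784, 128, 64, 0.05   # toy sizes

    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v):
        """One block Gibbs step v -> h -> v' for binary units."""
        h = (rng.random((len(v), n_hidden)) < sigmoid(v @ W + b_h)).astype(float)
        return (rng.random((len(v), n_visible)) < sigmoid(h @ W.T + b_v)).astype(float)

    # The defining feature of PCD: the negative-phase chains are created once
    # and carried over between updates instead of being restarted at the data.
    v_chain = (rng.random((n_chains, n_visible)) < 0.5).astype(float)

    batches = [rng.random((32, n_visible)).round() for _ in range(10)]   # toy binary data

    for batch in batches:
        p_h_data = sigmoid(batch @ W + b_h)      # positive phase, driven by the data
        v_chain = gibbs_step(v_chain)            # negative phase: continue the old chain
        p_h_model = sigmoid(v_chain @ W + b_h)
        W += lr * (batch.T @ p_h_data / len(batch) - v_chain.T @ p_h_model / len(v_chain))
        b_h += lr * (p_h_data.mean(axis=0) - p_h_model.mean(axis=0))
        b_v += lr * (batch.mean(axis=0) - v_chain.mean(axis=0))
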
8
votes
2 answers

What is the fastest unsupervised feature learning algorithm?

I took a look at several unsupervised feature learning algorithms. Most of them (restricted Boltzmann machines and sparse auto-encoders) have very long training times even on small datasets like MNIST. I wonder if there are similar algorithms that…
8
votes
1 answer

Is initializing the weights of autoencoders still a difficult problem?

I was wondering if initializing the weights of autoencoders is still difficult and what the most recent strategies are for it. I have been reading different articles. In one of Hinton's papers (2006), it says: With large initial weights,…