I am a bit confused about deep belief networks.
Should the RBM output be the input to the feed-forward neural network for the fine-tuning step, or do just the weights of the neural network have to be initialized with the ones we get from the RBM?
The point of transfer learning in general is to restrict the parameter space and act as a kind of regularization. You can think of the network as having some 'transferable skill' from being trained on a tangentially related problem for which a lot of data is available. The pretrained network then only has to learn how the target task differs from the task it was pretrained on, which allows you to use complex networks even when only limited data is available for the target task. To take image classification as an example: the idea is that the pretrained network is already able to discern shapes and features from the pixel intensities of an image.
What, then, is fine tuning? There are different ways in which you can use a pretrained network. Fine tuning refers to the case where you use the pretrained network as a starting point and then continue training all parameters on the target task. It is the latter of the two options listed in your question: the weights of the network are initialized from the first task (here, the RBM pretraining), as in the sketch below.
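Here is a minimal PyTorch sketch of that option. The shapes (784 visible units, 256 hidden units, 10 classes) and the `rbm_W`/`rbm_b` placeholders are illustrative assumptions standing in for whatever your RBM actually learned:

```python
import torch
import torch.nn as nn

# Placeholders for the parameters learned by contrastive-divergence
# pretraining of a single RBM (hypothetical shapes: 784 visible, 256 hidden).
rbm_W = torch.randn(256, 784)   # would be the learned RBM weight matrix
rbm_b = torch.zeros(256)        # would be the learned hidden biases

# Feed-forward network for the supervised target task (e.g. 10 classes).
net = nn.Sequential(
    nn.Linear(784, 256),
    nn.Sigmoid(),
    nn.Linear(256, 10),
)

# Fine tuning: copy the RBM parameters into the first layer as an
# initialization only -- the RBM's outputs are NOT fed in as data...
with torch.no_grad():
    net[0].weight.copy_(rbm_W)
    net[0].bias.copy_(rbm_b)

# ...and then train *all* parameters on the labeled data as usual.
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))  # dummy labeled batch
loss = criterion(net(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Note that the network still consumes the raw inputs; the RBM only determines where training starts in weight space.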
As shown in the linked answer, fine tuning is not the only option: you could also freeze the pretrained layers and add a dense layer at the end that connects to the output layer. Only these new connections to the output are then trained on the data you have for the target task, as in the second sketch below.
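Continuing the same illustrative setup, freezing could look like this (again a sketch, not a definitive recipe):

```python
# Variant with frozen pretrained layers: only the new output head is trained.
for param in net[0].parameters():
    param.requires_grad = False  # freeze the RBM-initialized layer

head = nn.Linear(256, 10)  # fresh dense layer connected to the output
frozen_net = nn.Sequential(net[0], nn.Sigmoid(), head)

# The optimizer only sees the parameters of the new head, so the
# pretrained features are left untouched during training.
optimizer = torch.optim.SGD(head.parameters(), lr=0.1)
```

Freezing is the safer choice when you have very little labeled data, while fine tuning all layers tends to work better once enough target-task data is available.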