I'm having a problem similar to the one described in the post "(Feed-Forward) Neural Networks keep converging to mean".
The model is built with the Deep Neural Network library for MATLAB by Masayuki Tanaka. The library is designed for classification, but I've managed to change the last layer's transfer function to linear so that the Deep Belief Network performs regression instead. I'm fairly sure the feed-forward and backpropagation passes are performed correctly.
The model has 2 inputs, 4 hidden layers with 500, 200, 100, and 10 units respectively, and 1 output. The hidden layers use sigmoid activations.
The data is simply a sine wave generated with MATLAB's sind. You can think of the input as a time series where the two features are the sine values at t1 and t2, and I want to predict the sine value at t3:
inputdata  = [sind(1:1000); sind(2:1001)]';  % 1000x2 matrix (one sample per row)
outputdata = sind(3:1002)';                  % 1000x1 matrix
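For reference, here is roughly how I build and train the model. I'm going from memory of the toolbox's example script, so the exact function and option names may differ slightly between versions:

% Sketch of my setup, assuming the toolbox's randDBN / pretrainDBN /
% SetLinearMapping / trainDBN interface (option names may differ).
nodes = [2 500 200 100 10 1];   % 2 inputs, 4 hidden layers, 1 output
dbn = randDBN(nodes);           % randomly initialized DBN

opts.MaxIter = 100;             % illustrative training settings
opts.BatchSize = 100;
opts.Verbose = true;

dbn = pretrainDBN(dbn, inputdata, opts);            % unsupervised layer-wise pretraining
dbn = SetLinearMapping(dbn, inputdata, outputdata); % linear output layer for regression
dbn = trainDBN(dbn, inputdata, outputdata, opts);   % supervised fine-tuning (backprop)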
By default, inputdata is between -1 and 1 rather than 0 and 1; I'm assuming the DBN is fine with that. Note that I'm not interested in testing the data with an LSTM yet; I want to get this working with the DBN.
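In case the range does matter (sigmoid/Bernoulli RBMs usually assume visible units in [0, 1]), this is the rescaling I could apply; it's plain MATLAB, nothing toolbox-specific:

% Map the [-1, 1] sine values into [0, 1], keeping the transform so
% predictions can be mapped back afterwards.
scaledin  = (inputdata  + 1) / 2;   % now in [0, 1]
scaledout = (outputdata + 1) / 2;
% after prediction: pred = 2 * scaledpred - 1;   % back to [-1, 1]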
I would appreciate a hint on what I may be doing wrong, as the model always outputs roughly the mean (0.50 - 0.55). I've tested with fewer layers and it seemed to predict correctly, but with more than 3 or 4 layers the model predicts the mean (a constant value).
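To make "predicts the mean" concrete, this is how I compare the spread of the predictions with the spread of the targets (v2h is the toolbox's forward pass, if I remember its name right):

% Quantify the collapse: a healthy model's outputs should have a
% standard deviation comparable to the targets', not close to zero.
pred = v2h(dbn, inputdata);   % forward pass through the whole DBN
fprintf('pred:   mean %.3f, std %.3f\n', mean(pred), std(pred));
fprintf('target: mean %.3f, std %.3f\n', mean(outputdata), std(outputdata));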
Could it be caused by very large or very small values in the hidden layers when backpropagating? I'm really lost; I've spent so much time on this.
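My back-of-the-envelope reasoning for that hunch: the sigmoid derivative is at most 0.25, so ignoring the weight magnitudes, the backpropagated error can shrink by up to a factor of 0.25 per sigmoid layer before it reaches the early layers:

% Worst-case shrink from the sigmoid derivative alone:
% s'(x) = s(x)*(1 - s(x)) <= 0.25 for every unit
nHidden = 4;
shrink = 0.25 ^ nHidden;   % 0.25^4 ~= 0.0039
fprintf('derivative-only gradient factor through %d layers: %g\n', nHidden, shrink);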
Thank you for your time.