
I've created two neural networks for a prediction task: the first has one hidden layer and the second has two hidden layers. I use cross-validation, and the training error for both networks is around 2e-4. However, at the test step, for both topologies, the predicted values barely change over time (only a small fluctuation, roughly in the fifth decimal place).

I've tried different activation functions for the hidden layers (both tanh and sigmoid), with a sigmoid for the output layer.
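
For reference, here is a simplified sketch of the kind of setup I'm describing, with one hidden layer, tanh hidden units, a sigmoid output, and mean-squared-error training. This is not my actual code: the data, layer sizes, and learning rate are placeholders, and the cross-validation and two-hidden-layer variant are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: inputs normalized to [0, 1], one target column.
X = rng.random((200, 3))
y = rng.random((200, 1))

# One hidden layer: tanh hidden units, sigmoid output, MSE loss.
n_in, n_hidden, n_out = X.shape[1], 10, 1

# Weights drawn uniformly from [0, 1], as in my setup.
W1 = rng.random((n_in, n_hidden))
b1 = np.zeros((1, n_hidden))
W2 = rng.random((n_hidden, n_out))
b2 = np.zeros((1, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for epoch in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network prediction

    # Mean squared error
    err = out - y
    loss = np.mean(err ** 2)

    # Backward pass: gradients of the MSE w.r.t. each parameter
    d_out = 2 * err / len(X) * out * (1 - out)   # sigmoid derivative
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * (1 - h ** 2)          # tanh derivative
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    # Plain gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```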

So, where is the problem?

Thanks.

user20053
  • It's not clear what you are doing. You say the evaluations don't change over time at the test stage? If you aren't training them, they shouldn't change. They are supposed to change during training. – Douglas Zare Jan 30 '13 at 05:27
  • You need to provide a lot more details: what kind of optimization technique are you using? How do you initialize the weights? What are the ranges of your inputs and outputs? What loss are you using? – bayerj Jan 30 '13 at 08:30
  • I want to make predictions. My input data is normalized to [0, 1], so I've used a sigmoid function for the output. For training I've used the mean squared error, and the weights are randomly initialized between 0 and 1. – user20053 Jan 30 '13 at 20:18

0 Answers