I have problems teaching a neural network to learn a game: the following NeuralNetwork does not converge suitably. In the game, the player has to move a red dot by applying a force to it. A human uses his mouse for this purpose, while the NeuralNetwork is supposed to regress the applied force. When a human plays, it looks similar to this:
Human playing the Game - Video
The NeuralNetwork shall take the following Inputs:
- the players position (x,y)
- the players speed (x,y)
- the position and speed of all "birds" (x,y)
- the radius of the birds (r)
- (I also tried: the last applied force - does not help)
and apply a force to the player to move him across the area. As it turns out, the following NeuralNetwork does a bad job:
public GamePlayingNetwork(int inputNodes, int outPutNodes) {
    nn = new MultiLayerPerceptron(inputNodes, 36, 36, outPutNodes); // creating fully-meshed NN; also tried fewer and more nodes
    for (Layer l : nn.getLayers()) {
        for (Neuron n : l.getNeurons()) {
            n.setTransferFunction(new org.neuroph.core.transfer.Linear()); // Sigmoid does not work better, Rectifier does not converge
        }
    }
    nn.getLayerAt(3).getNeuronAt(0).setTransferFunction(new Linear());
    nn.getLayerAt(3).getNeuronAt(1).setTransferFunction(new Linear());
    nn.randomizeWeights(new NguyenWidrowRandomizer(-1, 1)); // works well

    System.out.println(nn.getWeights().length);

    MomentumBackpropagation bp = new MomentumBackpropagation();
    bp.setMomentum(10); // a high momentum works well here
    bp.setLearningRate(0.015); // learning rates over 0.015 result in a NaN error (the weights just become NaN)
    bp.setErrorFunction(new MeanSquaredError());
    bp.addListener(this);
    bp.setMaxIterations(1500); // more iterations do not help
    nn.setLearningRule(bp);
}
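As an aside, the NaN weights that appear at learning rates above 0.015 are a typical symptom of unscaled inputs (e.g. raw pixel coordinates) blowing up the gradients. Below is a minimal sketch of min-max scaling each feature into [-1, 1] before it is fed to the network; the class name and the idea of per-feature bounds are my own illustration, and the actual ranges (field size, maximum speed, maximum radius) would have to come from the game:

```java
// Sketch: min-max scaling each input feature into [-1, 1] before training.
// Scaler is a hypothetical helper, not part of the post's code; the
// per-feature min/max bounds are assumptions and must match the game.
public class Scaler {

    // Maps a value from [min, max] linearly onto [-1, 1].
    static double scale(double v, double min, double max) {
        return 2.0 * (v - min) / (max - min) - 1.0;
    }

    // Scales a whole input vector, one (min, max) pair per feature.
    static double[] scaleAll(double[] raw, double[] min, double[] max) {
        double[] out = new double[raw.length];
        for (int i = 0; i < raw.length; i++) {
            out[i] = scale(raw[i], min[i], max[i]);
        }
        return out;
    }
}
```

With inputs bounded like this, larger learning rates usually stop producing NaN weights, because no single feature dominates the weight updates.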
public void train(ArrayList<VectorND> samples) {
    System.out.println("Training with " + samples.size() + " sample(s)... this could take a while");
    DataSet set = new DataSet(44, 2);
    for (VectorND nd : samples) {
        double[] all = nd.getArray();
        double[] input = Arrays.copyOfRange(all, 0, nd.length() - 2);
        double[] output = Arrays.copyOfRange(all, nd.length() - 2, nd.length());
        set.addRow(input, output);
    }
    set.shuffle();
    nn.learn(set);
    System.out.println("Training Done");
}
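For reference, the 44-value input rows that `DataSet(44, 2)` expects could be assembled from the features listed above roughly like this. `Player`, `Bird`, and `buildInput` are hypothetical names (the post does not show its encoding), and the assumption that 8 birds each contribute position, speed, and radius (4 + 8 * 5 = 44) is mine:

```java
// Sketch: packing the inputs listed above into one flat 44-value array.
// Player, Bird, and buildInput are assumed names for illustration only.
import java.util.List;

public class InputEncoder {

    // Minimal stand-ins for the game objects (assumptions, not from the post).
    record Player(double x, double y, double vx, double vy) {}
    record Bird(double x, double y, double vx, double vy, double r) {}

    static double[] buildInput(Player p, List<Bird> birds) {
        double[] in = new double[4 + birds.size() * 5];
        in[0] = p.x();  in[1] = p.y();   // player position
        in[2] = p.vx(); in[3] = p.vy();  // player speed
        int i = 4;
        for (Bird b : birds) {           // one 5-value slot per bird
            in[i++] = b.x();  in[i++] = b.y();
            in[i++] = b.vx(); in[i++] = b.vy();
            in[i++] = b.r();
        }
        return in;
    }
}
```

The two force components recorded from the mouse would then form the matching 2-value output row.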
AI playing the game not effective enough - Video
EDIT: This question is not a duplicate. I am looking for a way to make a neural network more effective given an unchangeable DataSet. The problem with the DataSet is the following (I found this out after writing the question):
The DataSet contains large amounts of data where nothing happens, so the network learns what to do when nothing happens. Only a few samples show what it should do when the player is actually in trouble. The network therefore settles into a local minimum: it knows exactly what to do when nothing critical is happening, but not what to do when the situation becomes critical. It just keeps doing in critical moments what it did in uncritical ones...
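One common workaround for this kind of imbalance is to oversample the rare rows before training, so the do-nothing majority no longer dominates the mean squared error. The sketch below is my own illustration, not part of the post's code: it assumes a "critical" sample can be recognized by a non-negligible recorded force, and the threshold and duplication factor are placeholders to tune:

```java
// Sketch: duplicating the rare "critical" (input, output) rows so the
// quiet majority no longer dominates the loss. isCritical and its
// threshold are assumptions for illustration; each row is stored as
// row[0] = input array, row[1] = output array.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Oversampler {

    // A sample counts as "critical" here if the recorded force is non-negligible.
    static boolean isCritical(double[] output, double threshold) {
        return Math.hypot(output[0], output[1]) > threshold;
    }

    // Keeps every row, adds (factor - 1) extra copies of each critical row,
    // then shuffles, mirroring the set.shuffle() call in train().
    static List<double[][]> oversample(List<double[][]> rows,
                                       double threshold, int factor) {
        List<double[][]> out = new ArrayList<>();
        for (double[][] row : rows) {
            out.add(row);
            if (isCritical(row[1], threshold)) {
                for (int k = 1; k < factor; k++) {
                    out.add(row);
                }
            }
        }
        Collections.shuffle(out);
        return out;
    }
}
```

The rebalanced list would then be written into the Neuroph DataSet exactly as in train(); the recorded data itself stays unchanged, only the row frequencies shift.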
I am looking forward to your ideas and help :)
Kind regards, Niclas