
I am training a binary classification model in Keras with cross-entropy loss: `model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])`. What confuses me most is that my train_acc and train_loss are not consistent: sometimes train_acc is 1 while train_loss is near 3. If I understand correctly, when train_acc is 1 every prediction lies on the correct side of 0.5, so train_loss should not exceed 0.693, which is the cross-entropy loss when all predictions equal 0.5.
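To check my reasoning, here is a small numpy sketch (not my actual training code; the sample labels and predictions are made up) showing that when every prediction is on the correct side of 0.5, the mean binary cross-entropy stays below log(2):

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Mean binary cross-entropy, clipped for numerical stability,
    # as Keras computes it for probability outputs.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([0., 1., 1., 0.])

# Every prediction is on the correct side of 0.5, so thresholded accuracy is 1
# and each per-sample loss term is below log(2) ~= 0.693.
y_pred = np.array([0.4, 0.6, 0.9, 0.1])

loss = binary_crossentropy(y_true, y_pred)
accuracy = np.mean((y_pred > 0.5) == y_true)
print(accuracy, loss < np.log(2))  # 1.0 True
```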

I have checked the data: the labels are 0 for false and 1 for true, and another dataset works fine with the same code. What could be the cause of this unusual behavior, or am I mistaken?
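For reference, this is the kind of sanity check I ran on the labels (`y_train` here is a stand-in for my actual array):

```python
import numpy as np

# Stand-in for the real training labels.
y_train = np.array([0, 1, 1, 0, 1])

# binary_crossentropy expects labels that are exactly 0 or 1;
# anything else (e.g. -1/1 or class indices > 1) inflates the loss.
unique = np.unique(y_train)
assert set(unique.tolist()) <= {0, 1}, f"unexpected label values: {unique}"
print("labels OK:", unique)
```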

I uploaded a picture for illustration, but it does not show up (maybe a bug).

[figure 1: training loss and accuracy curves]

YJ. Yang
    Possible duplicate of [Train loss and accuracy](https://stats.stackexchange.com/questions/329862/train-loss-and-accuracy) or https://stats.stackexchange.com/questions/282160/how-is-it-possible-that-validation-loss-is-increasing-while-validation-accuracy and many more https://stats.stackexchange.com/search?tab=votes&q=%5bneural-networks%5d%20accuracy%20loss – Sycorax Mar 29 '19 at 13:01
  • If you indeed have a binary classification task, the *average* cross-entropy loss at the start should be about $\log(2)$. At the beginning of training, you have loss somewhere between 3 and 7, which implies there is a bug somewhere in the network. Without code there's simply nothing that can be done on our end. Advice on debugging neural networks can be found here https://stats.stackexchange.com/questions/352036/what-should-i-do-when-my-neural-network-doesnt-learn/352037#352037 – Sycorax Mar 29 '19 at 14:34
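A quick sketch of the point in the comment above: with sigmoid outputs hovering near 0.5, as for an untrained network, the mean binary cross-entropy comes out to about log(2), regardless of the labels (labels here are randomly generated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000).astype(float)

# An untrained sigmoid output sits near 0.5 for every sample.
y_pred = np.full(1000, 0.5)

# Mean binary cross-entropy: -log(0.5) = log(2) for every sample.
loss = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(round(loss, 3))  # 0.693
```

A starting loss far above this (between 3 and 7 in the figure) therefore points to outputs that are confidently wrong, or to a label/output mismatch, rather than to normal random initialization.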

0 Answers