
I am training a CNN, and it seems that no matter what I do, my validation loss is always much greater than my training loss. To my understanding, this means my model is always overfitting my data set.

INFORMATION: My model is built in Keras running on TensorFlow 1.14 and is trained with binary cross-entropy loss. I have a data set of about 2500 tagged photos. My TensorBoard plot: [screenshot of training vs. validation loss curves]
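For context, here is a minimal sketch of the kind of setup described above, assuming a small binary-classification CNN; the actual architecture is not shown in the question, and the 128x128x3 input size is a hypothetical placeholder. Only the binary cross-entropy loss and the Keras/TF 1.14 stack come from the question itself.

```python
# Assumed architecture for illustration only; the question does not show the real layers.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),  # single sigmoid unit for binary labels
])

# Binary cross-entropy loss, as stated in the question
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
```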

    I think you're right: this sounds like [tag:overfitting]. Does this answer your question? [What should I do when my neural network doesn't generalize well?](https://stats.stackexchange.com/questions/365778/what-should-i-do-when-my-neural-network-doesnt-generalize-well) – Sycorax Jun 16 '20 at 16:55
  • The problem then is that it's overfitting from the first epoch. I have a relatively decent-sized data set, so does that mean there is a problem with my network architecture? – Kay Jay Jun 16 '20 at 17:02
  • Could be. Two of the suggestions in the linked thread are to try a different architecture or a different regularization level. Using a neural network is a lot like picking a lock: you've got to line up all the tumblers just right or else the lock won't open. Likewise, you can't expect to get a good model on the first try; keep trying different alternatives until you get the desired result. – Sycorax Jun 16 '20 at 17:04
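
To illustrate the regularization suggestion in the comments above, here is a hedged sketch of two common options in Keras: dropout / L2 weight decay on the layers, and on-the-fly data augmentation for the ~2500 photos. The specific rates, ranges, and directory layout are illustrative guesses, not values from the question.

```python
# Sketch of regularization options (dropout, L2, augmentation); all hyperparameters are placeholders.
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),                     # drop feature maps to reduce co-adaptation
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(64, activation='relu',
                 kernel_regularizer=keras.regularizers.l2(1e-4)),  # L2 weight decay
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Augment the training photos on the fly to fight overfitting
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True,
).flow_from_directory('data/train',           # hypothetical directory layout
                      target_size=(128, 128),
                      batch_size=32,
                      class_mode='binary')
```

Whether this helps depends on the data and architecture; the linked thread covers other options (smaller networks, early stopping, more data) as well.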

0 Answers