I used this code, which uses RNNs, for spam detection and got reasonable results. But when I use the same code for sentiment analysis, the model overfits badly: its training accuracy keeps growing, while its test accuracy stays flat. The dictionary (vocabulary) overlap between the training set and the test set is almost the same for my two datasets, and the number of training samples is also comparable across the two tasks. What is the reason for this overfitting? Does it relate to the nature of the tasks?
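For context, this is not from the original post, but a minimal sketch of how the "dictionary overlap" mentioned above could be measured, assuming simple whitespace tokenization; the helper name `vocab_overlap` is hypothetical:

```python
def vocab_overlap(train_texts, test_texts):
    """Fraction of the test-set vocabulary that also appears in training.

    Assumes lowercased, whitespace-tokenized text; a real pipeline would
    use the same tokenizer the RNN's embedding layer was built with.
    """
    train_vocab = {tok for text in train_texts for tok in text.lower().split()}
    test_vocab = {tok for text in test_texts for tok in text.lower().split()}
    if not test_vocab:
        return 0.0
    return len(train_vocab & test_vocab) / len(test_vocab)

# Toy example: 4 of the 6 distinct test tokens occur in training.
train = ["the movie was great", "i loved the plot"]
test = ["the plot was great", "terrible acting"]
print(round(vocab_overlap(train, test), 2))  # → 0.67
```

Comparing this number across the two datasets is one way to verify the claim that out-of-vocabulary rate is not the cause of the gap.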
- It's really hard to know ahead of time if a given model will overfit to a particular task or not. – Aaron Jun 18 '17 at 21:18
- @Aaron Yes, but now that we know it does, I'm seeking the reason. – Hossein Jun 19 '17 at 12:26