Hi, are there any tips or procedures to train a neural network that generalizes well over the whole dataset, so that the gap between training error and test error does not increase dramatically over time? What can we do at all, besides just training and watching whether the training and test errors diverge?
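For context on "watching whether the errors diverge": one common way to automate that watching is early stopping, i.e. halting training once the validation loss has not improved for a set number of epochs. Below is a minimal framework-agnostic sketch; the `EarlyStopping` class, `patience`, and `min_delta` names are illustrative, not from any particular library.

```python
class EarlyStopping:
    """Signal that training should stop once validation loss
    stops improving (a hypothetical, framework-agnostic sketch)."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience      # epochs to tolerate without improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best_loss = float("inf") # best validation loss seen so far
        self.bad_epochs = 0           # consecutive epochs without improvement

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop training."""
        if val_loss < self.best_loss - self.min_delta:
            self.best_loss = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a training loop you would call `step()` after each epoch's validation pass and break out when it returns `True`; this caps the train/test gap by refusing to keep fitting once the validation error has plateaued or started rising.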
GAM is a specific model, so I removed the tag. I marked the question as a duplicate of another one. Hope you find your answer there. – Tim Jun 23 '21 at 10:35