I have built 2 models:
1) precision: 0.80 - AUC ROC: 0.69
2) precision: 0.90 - AUC ROC: 0.94
I submitted both of them to the Kaggle Titanic competition; the first model scored 0.7 and the second one scored 0.4, so I know the second model is over-fitted. How can I detect this before submitting results to Kaggle, using plots or Python code? Does the CAP curve do that? Can I find it out using the ROC curve?
I used train_test_split on the Kaggle training dataset.
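For context on what I mean, here is a minimal sketch of the kind of check I am asking about: comparing the AUC on the training data against a cross-validated AUC, where a large gap would suggest over-fitting. This uses synthetic data from make_classification and a RandomForestClassifier purely as stand-ins (assumptions), not my actual Titanic features or model.

```python
# Sketch: detect over-fitting by comparing train AUC vs. cross-validated AUC.
# Synthetic data and a flexible model stand in for the real Titanic setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_auc_score

# Stand-in for the Titanic training data (assumption).
X, y = make_classification(n_samples=800, n_features=10, random_state=0)

# A deliberately flexible model that can memorise the training data.
model = RandomForestClassifier(n_estimators=200, max_depth=None, random_state=0)
model.fit(X, y)

# AUC measured on the same data the model was fitted on.
train_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# AUC estimated on held-out folds via 5-fold cross-validation.
cv_auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()

print(f"train AUC: {train_auc:.3f}")
print(f"5-fold CV AUC: {cv_auc:.3f}")
# If train AUC is much higher than the CV AUC, the model is over-fitting
# and the Kaggle leaderboard score will likely be closer to the CV number.
```

Is this cross-validation gap the right way to catch the problem, or do the CAP/ROC curves give a better signal?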