My question is specifically about Kaggle competitions. Why would I need to use cross-validation if I can just train on all of the training data and then check the accuracy on the Kaggle leaderboard?
The only benefit of cross-validation that I can see is that by splitting the training data I can measure the model's accuracy on a held-out part of it, but that does not seem very helpful, since it does not guarantee the same accuracy on the real test data.
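
For reference, this is the kind of workflow I am thinking of when I say cross-validation (a minimal sketch with scikit-learn; the synthetic data and the choice of RandomForestClassifier are just placeholders for a real competition's train set and model):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for a real competition's train.csv
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

model = RandomForestClassifier(random_state=42)

# 5-fold CV: fit on 4/5 of the training data, score on the held-out 1/5,
# repeated 5 times so every row is used for validation exactly once
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("Fold accuracies:", scores)
print("Mean CV accuracy: %.3f (+/- %.3f)" % (scores.mean(), scores.std()))
```

So the score I get out of this is still computed on (part of) the training data, which is why I am not sure what it buys me over just submitting to Kaggle and reading the score there.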