Suppose I have a training set for an SVM learning algorithm with 60 positive-class samples and 40 negative-class samples. Are the following two oversampling methods equivalent?
(1) bootstrapping the 40 negative samples up to 60, so the classes are balanced;
(2) bootstrapping both classes up to 500 samples each.
This question may look similar to existing questions, but it is not. I am aware that I could use undersampling, oversampling, SMOTE, or cost-sensitive learning. My question is specific to the SVM algorithm: given that both approaches are oversampling, is there any difference between the two methods, and which one is more reasonable?
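To make the two schemes concrete, here is a minimal sketch of what I mean by each. This is illustrative only: the data are random, and I am reading method (2) as 500 samples per class (the target counts are assumptions, not part of any library API).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 60 positive, 40 negative samples, 2 features each.
X = rng.normal(size=(100, 2))
y = np.array([1] * 60 + [0] * 40)

def bootstrap_class(X, y, label, n_target, rng):
    """Sample one class with replacement until it has n_target samples."""
    idx = np.flatnonzero(y == label)
    picked = rng.choice(idx, size=n_target, replace=True)
    return X[picked], y[picked]

# Method (1): oversample only the negatives, 40 -> 60.
X_pos, y_pos = X[y == 1], y[y == 1]
X_neg, y_neg = bootstrap_class(X, y, 0, 60, rng)
X1 = np.vstack([X_pos, X_neg])
y1 = np.concatenate([y_pos, y_neg])

# Method (2): bootstrap both classes, assumed here to mean 500 each.
X_pos2, y_pos2 = bootstrap_class(X, y, 1, 500, rng)
X_neg2, y_neg2 = bootstrap_class(X, y, 0, 500, rng)
X2 = np.vstack([X_pos2, X_neg2])
y2 = np.concatenate([y_pos2, y_neg2])

print(np.bincount(y1))  # [60 60]
print(np.bincount(y2))  # [500 500]
```

Both resampled sets are balanced, but (2) contains many duplicated points in each class, which is part of what I am asking about.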