After reading many posts, I thought I would ask: why should an SVM be biased towards the majority class like other classifiers, given that an SVM never uses the whole training set? It only uses the support vectors to determine the best hyperplane, the one that maximizes the margin between the two classes.
For example, suppose a binary classification problem has 1000 records, 900 in one class and 100 in the other. The SVM might end up using only 50 support vectors and ignore the rest. In that case, will it still be biased towards the majority class? That is, will my ROC curve show poor results?
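For concreteness, here is a minimal sketch of that scenario (mine, not from any of the posts; it uses scikit-learn's SVC on a synthetic 900-vs-100 dataset). The point is that the fitted model keeps only a subset of the data as support vectors, yet the decision boundary can still favor the majority class:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, recall_score

# Synthetic binary problem: roughly 900 majority vs 100 minority records.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           n_informative=3, n_redundant=1,
                           flip_y=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

# decision_function scores are enough for ROC AUC; no probabilities needed.
scores = clf.decision_function(X_test)
print("support vectors per class:", clf.n_support_)
print("ROC AUC:", roc_auc_score(y_test, scores))
print("minority recall:", recall_score(y_test, clf.predict(X_test), pos_label=1))
```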
Here is one of the links discussing this issue with SVMs; it suggests changing the cost of misclassification for the different classes, just as one would for any other classifier that uses the entire dataset:
http://www.kdnuggets.com/2016/04/unbalanced-classes-svm-random-forests-python.html/3
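As a hedged sketch of what that suggestion looks like in practice (the exact code in the linked post may differ), scikit-learn's SVC exposes a class_weight parameter that rescales the misclassification penalty C per class:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Same kind of synthetic 900-vs-100 split as in the sketch above.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" scales C for each class inversely to its
# frequency, so errors on the ~100 minority records cost roughly 9x more.
# An explicit dict works too, e.g. class_weight={0: 1, 1: 9}.
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X_tr, y_tr)
print("ROC AUC with class weights:",
      roc_auc_score(y_te, clf.decision_function(X_te)))
```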