I'm doing positive/negative classification on an imbalanced dataset. About 30% of the samples in the data are positive, and the rest are negative. With some tuning of parameters and classification algorithms, I've built an SVC (using the RBF kernel) with about 86% accuracy, 46% precision, and 26% recall. These numbers are workable, but for this particular dataset I want the precision to be higher, even at the cost of recall. Basically, I want the classifier to be more conservative about predicting positive, so that when it does predict positive it's more likely to be right.
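For reference, here's a minimal sketch of the kind of setup I mean. The synthetic data, parameter values, and train/test split are placeholders, not my actual dataset or tuning:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Placeholder dataset mimicking the imbalance: ~30% positive, ~70% negative
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# RBF-kernel SVC; C and gamma here are defaults, not my tuned values
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("accuracy: ", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall:   ", recall_score(y_test, y_pred))
```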
How do you generally "bias" a classifier for a particular class?
Thanks in advance!