Let $\mathcal{X}$ be a training set for a binary SVM with an RBF kernel. $\mathcal{X}$ consists of $10$ positive examples and $100$ negative examples. I am interested in optimizing the parameters of this SVM, i.e. the well-known parameters $C$ and $\gamma$.
What I am currently doing is partitioning the above set, $\mathcal{X}$, into a $70\%$ training subset and a $30\%$ testing subset, and carrying out a grid search ($3$-fold cross-validation) in order to obtain the best pair $(C_{opt},\gamma_{opt})$.
That is, $\mathcal{X}$ is partitioned into the following three subsets, $$ \mathcal{X}_{1},\:\mathcal{X}_{2},\:\mathcal{X}_{3}, $$ which hold, respectively, $4$, $3$, and $3$ positive samples (randomly chosen). Moreover, each subset also contains a number of negative samples ($34$, $33$, and $33$, respectively), likewise randomly chosen.
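For concreteness, here is roughly what my procedure looks like in scikit-learn terms (the synthetic stand-in data, the particular parameter grid, and the F1 scoring choice are my own illustrative assumptions, not part of the question):

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Synthetic stand-in for X: 10 positive and 100 negative examples in 2-D.
X = np.vstack([rng.normal(1.0, 1.0, size=(10, 2)),
               rng.normal(-1.0, 1.0, size=(100, 2))])
y = np.array([1] * 10 + [0] * 100)

# 70% / 30% split, stratified so both classes appear in each part.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Grid search over (C, gamma) with stratified 3-fold cross-validation.
param_grid = {"C": 10.0 ** np.arange(-2, 5),
              "gamma": 10.0 ** np.arange(-4, 2)}
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=cv, scoring="f1")
search.fit(X_tr, y_tr)

print(search.best_params_)        # the selected (C_opt, gamma_opt)
print(search.score(X_te, y_te))   # F1 of the refit model on the held-out 30%
```

Stratifying both the train/test split and the CV folds keeps the few positive examples spread across all subsets, which matches the fold counts I described above.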
The cross-validation procedure, however, does not seem to find the optimal parameters: the selected pair performs poorly on the testing subset.
What would you suggest I do? Thank you very much in advance!