Note: for $n = 50$,

- leave-5-out CV (holding out 5 samples per fold) = 10-fold CV
- leave-10-out CV (holding out 10 samples per fold) = 5-fold CV
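In general, holding out $p$ of $n$ samples per fold yields $k = n/p$ folds (assuming $p$ divides $n$ evenly):

$$k = \frac{n}{p}: \qquad \frac{50}{5} = 10 \quad \text{and} \quad \frac{50}{10} = 5.$$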
For the choice of $k$/$p$, please see Choice of K in K-fold cross-validation.
Update: exhaustively testing all possible splits
I suggest reading up on iterated/repeated cross validation, which fills the gap between testing each sample once and exhaustively testing all possible splits.

Testing each sample more than once makes it possible to measure the stability of the predictions with respect to slight changes in the training data. But it obviously does not increase the number of independent test cases.

Thus, the random error due to model instability is reduced, but the random error due to the finite (small) number of test cases is not affected.
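As a minimal sketch of what repeated CV looks like in practice (assuming scikit-learn is available; the iris data and logistic regression model below are just placeholders):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

X, y = load_iris(return_X_y=True)  # placeholder data set

# 5-fold CV repeated 10 times with different random splits:
# each sample is tested once per repetition, 10 times in total.
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

# Spread across repetitions reflects model (in)stability under slightly
# different training sets; it does not add independent test cases.
per_rep = scores.reshape(10, 5).mean(axis=1)  # mean accuracy per repetition
print(f"mean accuracy: {scores.mean():.3f}")
print(f"sd across repetitions (instability): {per_rep.std(ddof=1):.3f}")
```

The spread of the per-repetition means indicates how sensitive the model is to the particular split, while the number of distinct test cases stays fixed at $n$, so the testing-related random error does not shrink with more repetitions.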