I would like to perform $K$-fold cross-validation on a dataset to test how well a model generalizes.
For about the same computational effort, I could perform one full cross-validation with, say, $K = 20$, or two cross-validations with $K = 10$, each using a completely different partitioning of the data.
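For concreteness, here is a minimal sketch of the two options in scikit-learn (the `Ridge` model and the `make_regression` toy data are just placeholders for illustration); both schemes fit the model 20 times in total:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, RepeatedKFold, cross_val_score

# Toy data and model, purely for illustration.
X, y = make_regression(n_samples=200, n_features=10, noise=1.0, random_state=0)
model = Ridge()

# Option 1: a single 20-fold cross-validation (20 model fits).
cv_single = KFold(n_splits=20, shuffle=True, random_state=0)
scores_single = cross_val_score(model, X, y, cv=cv_single)

# Option 2: two 10-fold cross-validations over different random
# partitions of the data (also 20 model fits in total).
cv_repeated = RepeatedKFold(n_splits=10, n_repeats=2, random_state=0)
scores_repeated = cross_val_score(model, X, y, cv=cv_repeated)

print(scores_single.mean(), scores_single.std())
print(scores_repeated.mean(), scores_repeated.std())
```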
Is there any reason to prefer the former to the latter?
In other words, is increasing $K$ always a good thing?
As an argument against increasing $K$, I can imagine that leaving out a larger chunk of the data on each fold (possibly by partitioning the data in a deliberate way, and trying several different partitionings) might better test extrapolation to unseen regions.
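To make that last point concrete, one hypothetical "deliberate" partitioning would be to cluster the inputs and hold out whole clusters, so each test fold lies in a region of input space the model never trained on. A sketch using scikit-learn's `GroupKFold` (again with placeholder data and model):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold, cross_val_score

X, y = make_regression(n_samples=200, n_features=10, noise=1.0, random_state=0)
model = Ridge()

# Treat cluster membership as a group label, so each test fold is an
# entire region of input space that was absent from the training folds.
groups = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
scores_region = cross_val_score(model, X, y, cv=GroupKFold(n_splits=10),
                                groups=groups)
print(scores_region.mean(), scores_region.std())
```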