I know there's been some discussion of the differences between cross-validation (CV) and bootstrapping for estimating the out-of-sample prediction error of a classifier.
For example, here (Differences between cross validation and bootstrapping to estimate the prediction error), here (Bootstrapping estimates of out-of-sample error), and here (What is the .632+ rule in bootstrapping?).
However, I'm interested in maximizing the AUC directly, not the prediction error (1 - accuracy) itself, since the cutoff points are not specified a priori.
Would the reasoning in those posts still apply? For example, I find it difficult to calculate the AUC from only 10 observations (assuming 10-fold CV applied to a sample of 100 observations).
Currently I'm using the "optimism" bootstrap estimator, though it is pretty expensive computationally (at least on the PC I have access to).
Any thoughts?