
As far as I understand, leave-p-out is better than k-fold:

Leave-p-out or k-fold cross-validation for small dataset?

But I wonder, is there a theoretically better evaluation method than leave-p-out? I was thinking of something like an exhaustive leave-p-out that runs through every value of p, covering all possible ways the dataset could be split into two parts: one part the training set, the other the test set. This is of course completely impractical, as it would require a huge number of splits, but my interest is theoretical.
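To make the idea concrete, here is a minimal sketch of the exhaustive scheme I have in mind (my own illustration, not an established method; the function name `exhaustive_cross_validation` and the choice of mean squared error are mine). For n data points, every nonempty proper subset can serve as the test set, so the number of splits is the sum over p of C(n, p), i.e. 2^n − 2:

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def exhaustive_cross_validation(model, X, y):
    """Score a model on every possible train/test split of the data.

    For each p from 1 to n-1, every size-p subset is held out once,
    so the total number of splits is 2**n - 2. Only feasible for tiny n.
    """
    n = len(y)
    scores = []
    for p in range(1, n):  # hold out p points at a time
        for test_idx in combinations(range(n), p):
            train_idx = [i for i in range(n) if i not in test_idx]
            model.fit(X[train_idx], y[train_idx])
            pred = model.predict(X[list(test_idx)])
            scores.append(mean_squared_error(y[list(test_idx)], pred))
    return np.mean(scores)

# Toy usage: even n = 8 already gives 2**8 - 2 = 254 splits.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=8)
print(exhaustive_cross_validation(LinearRegression(), X, y))
```

Note that averaging all splits uniformly, as above, is just one choice; how to weight splits of different test-set sizes against each other is itself part of the open question.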

alvitawa
    Can you please define what you mean by *better*? – usεr11852 Aug 11 '19 at 11:47
  • That it is best able to compare models accurately, so that the model that will make the most accurate predictions is also the model with the highest score according to the evaluation method. – alvitawa Aug 11 '19 at 13:55

0 Answers