Short answer: it is neither wrong nor new.
We discussed this validation scheme under the name "set validation" roughly 15 years ago when preparing a paper*, but in the end never actually used the term there, as we did not find it used in practice.
Wikipedia refers to the same validation scheme as *repeated random sub-sampling validation* or *Monte Carlo cross validation*.
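For concreteness, here is a minimal sketch of the scheme (my illustration, not from the Wikipedia article or from the paper below; the data set, classifier, split ratio and number of splits are arbitrary choices, and I'm assuming scikit-learn's `ShuffleSplit`, which as far as I can tell implements exactly this repeated random splitting):

```python
# Repeated random sub-sampling / Monte Carlo cross validation:
# draw N independent random train/test splits, fit on each training part,
# score on the corresponding test part, and pool the N test results.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ShuffleSplit, cross_val_score

X, y = load_iris(return_X_y=True)

N = 50                                                   # number of random splits
splitter = ShuffleSplit(n_splits=N, test_size=0.25, random_state=0)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=splitter)
print(f"mean accuracy over {N} random splits: {scores.mean():.3f} +/- {scores.std():.3f}")
```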
From a theoretical point of view, the concept was of interest to us because
- it provides another interpretation of the same numbers usually referred to as hold-out; only the model the estimate is claimed for differs: a hold-out estimate is taken as the performance estimate for exactly the model that was tested, whereas set or Monte Carlo validation treats the tested model(s) as surrogate model(s) and interprets the very same numbers as a performance estimate for a model built on the whole data set, as is usually done with cross validation or out-of-bootstrap validation estimates,
- and it is somewhere in between (a toy sketch of the three splitting schemes follows below this list)
  - the more common cross validation techniques (resampling without replacement, interpretation as estimate for the whole-data model),
  - hold-out (see above: same calculation and numbers, but typically without the N iterations/repetitions and with a different interpretation),
  - and out-of-bootstrap (the N iterations/repetitions are typical for out-of-bootstrap, but I've never seen them applied to hold-out, and they are [unfortunately] rarely done with cross validation).
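To make the "somewhere in between" point more tangible, here is a toy sketch (my illustration, not from the paper) of how the test cases are drawn under the three schemes; the sample size and split sizes are arbitrary:

```python
# How the test cases are drawn under the three splitting schemes (indices only;
# the surrogate-model logic of fitting on the rest and testing on these cases is the same).
import numpy as np

rng = np.random.default_rng(0)
n = 12                                    # tiny toy data set for readability
idx = np.arange(n)

# set validation / Monte Carlo CV: independent random splits, here 1/3 of the cases as test set
test_mc = np.sort(rng.permutation(idx)[: n // 3])

# k-fold cross validation: partition without replacement, each case in exactly one test fold
folds = np.array_split(rng.permutation(idx), 3)
test_cv = np.sort(folds[0])               # one of the k = 3 folds

# out-of-bootstrap: draw n training cases with replacement, test on the left-out (out-of-bag) cases
train_boot = rng.choice(idx, size=n, replace=True)
test_oob = np.setdiff1d(idx, train_boot)  # on average about 37 % of the cases

print("Monte Carlo test set:  ", test_mc)
print("k-fold test fold:      ", test_cv)
print("out-of-bootstrap test: ", test_oob)
```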
* Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G. Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005).
The "set validation" error for N = 1 is hidden in fig. 6 (i.e. its bias + variance can be recostructed from the data given but are not explicitly given.)
> but it seems not optimal in terms of variance. Are there arguments in favor or against the second procedure?
Well, in the paper above we found the total error (bias² + variance) of out-of-bootstrap and repeated/iterated $k$-fold cross validation to be pretty similar, with out-of-bootstrap having somewhat lower variance but higher bias. We did not follow up to check how much of this trade-off is due to resampling with vs. without replacement and how much is due to out-of-bootstrap's different split ratio of about 1 : 2.
Keep in mind, though, that I'm talking about accuracy in small sample size situations, where the dominating contributor to the variance uncertainty is the same for all resampling schemes: the limited number of real cases available for testing. This limitation applies equally to out-of-bootstrap, cross validation, and set validation. Iterations/repetitions allow you to reduce the variance caused by instability of the (surrogate) models, but not the variance due to the limited total sample size.
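As a rough back-of-the-envelope illustration of that limit (my sketch, not from the paper): treating the test results as Bernoulli trials, the binomial standard error of an observed accuracy depends only on the total number of independent test cases, no matter how the splits were generated or how many repetitions were run:

```python
# Binomial standard error of an observed accuracy p_hat for n_test independent test cases.
# Iterations/repetitions cannot shrink this contribution; only more real cases can.
from math import sqrt

p_hat = 0.80                              # assumed observed accuracy
for n_test in (25, 100, 400):             # hypothetical total numbers of test cases
    se = sqrt(p_hat * (1 - p_hat) / n_test)
    print(f"n_test = {n_test:4d}:  standard error ~ {se:.3f}")
# n_test =   25:  standard error ~ 0.080
# n_test =  100:  standard error ~ 0.040
# n_test =  400:  standard error ~ 0.020
```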
Thus, assuming that you perform an adequately large number of iterations/repetitions N, I'd not expect practically relevant differences in the performance of these validation schemes.
One validation scheme may fit better with the scenario you try to simulate by the resampling, though.