Bootstrapping and CV have tradeoffs:
Bootstrapping can be used to generate an enormous number of "new" datasets which are "noisy" versions of the original dataset (in the sense that the samples are reweighted relative to the original dataset). However, each bootstrap sample contains, in expectation, only about 63.2% (≈ 1 − 1/e) of the distinct original samples, irrespective of the dataset size.
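You can verify the ≈ 1 − 1/e figure empirically; here's a minimal sketch (the dataset size `n` is arbitrary, chosen just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # arbitrary dataset size; the fraction below doesn't depend on it

# One bootstrap sample: draw n indices with replacement from 0..n-1.
sample = rng.integers(0, n, size=n)

# Fraction of distinct original samples that appear in the bootstrap sample.
frac_unique = np.unique(sample).size / n
print(frac_unique)  # close to 1 - 1/e ≈ 0.632
```

Increasing `n` only tightens the concentration around 1 − 1/e; the expected fraction itself stays the same.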
Cross validation allows controlling the number of training samples more precisely. In 10-fold cross validation, for example, each iteration trains on (ignoring roundoff) 90% of the data.
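A quick sketch of that 90% figure with scikit-learn's `KFold` (the 100-sample toy array is just an assumption for illustration):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(100).reshape(-1, 1)  # 100 toy samples

kf = KFold(n_splits=10, shuffle=True, random_state=0)
train_sizes = [len(train_idx) for train_idx, test_idx in kf.split(X)]
# Every one of the 10 iterations trains on exactly 90 of the 100 samples.
```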
Bootstrapping, therefore, is useful for things like training many high-variance, low-bias predictors (like the trees in a random forest), or estimating confidence intervals of a statistic over many iterations. Conversely, because each model effectively sees fewer distinct samples, it can underestimate the actual performance of the predictor (i.e., overestimate its error).
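As an example of the confidence-interval use, here is a minimal percentile-bootstrap CI for the mean (the toy normal data and the 2000-replicate count are assumptions, not anything from a specific dataset):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=5.0, scale=2.0, size=200)  # toy data for illustration

n_boot = 2_000
# Each replicate: resample with replacement, recompute the statistic.
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(n_boot)
])

# Percentile bootstrap 95% CI for the mean.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```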
As you pointed out in your excellent comment below, Breiman indeed states that the OOB error rate is a good estimator of the OOS error rate. With all the enormous respect due to Breiman, FWIW, I've seen different results, especially for datasets with a very low SNR. Some of the answers to this question say the same. I assume that, asymptotically, for a given SNR, if the dataset grows large enough, Breiman is correct; in practice, you might want to be cautious about it.
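If you want to check this on your own data, it's easy to put the two estimates side by side with scikit-learn; a sketch (the synthetic dataset, with `flip_y` adding label noise to lower the SNR, is purely an assumption for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy dataset with substantial label noise (low-ish SNR), for illustration only.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           flip_y=0.3, random_state=0)

# OOB estimate: each tree is scored on the samples left out of its bootstrap.
rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
oob_acc = rf.oob_score_

# 10-fold CV estimate of the same forest's accuracy.
cv_acc = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=10,
).mean()
# The two estimates need not agree closely, especially at low SNR.
```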