Although this question is somewhat similar to "An example of when bootstrap has less bias than classically approximated estimates?", I would like to look at the topic from a more general point of view.
As far as I understand it, the major advantages of the bootstrap are:
The bootstrap can deal with any distribution of the underlying sample, whether you take a parametric or a non-parametric approach. You may have to use some clever trick, e.g. smoothing the empirical distribution function (EDF), but in general you can tackle any given sample.
The bootstrap can deal with statistics that are awkward to handle with "classical" approximations, e.g. constructing CIs for the median (a sketch follows below).
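To make the second point concrete, here is a minimal sketch of a percentile bootstrap CI for the median. The Exp(1) sample is just placeholder data, and the sample size, number of resamples `B`, and confidence level are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=50)  # placeholder data; any sample works

# Resample with replacement B times and compute the median of each resample
B = 10_000
boot_medians = np.array([
    np.median(rng.choice(sample, size=sample.size, replace=True))
    for _ in range(B)
])

# Percentile bootstrap CI: take the empirical 2.5% and 97.5% quantiles
ci_low, ci_high = np.percentile(boot_medians, [2.5, 97.5])
print(f"95% percentile bootstrap CI for the median: ({ci_low:.3f}, {ci_high:.3f})")
```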
But aside from these two benefits, are there any real advantages in performance? Can anybody give me an example of a distribution and a statistic where the bootstrap estimate is actually more accurate (less bias, less variance, or better coverage probability) than an estimate based on the CLT or equivalent methods that do not involve resampling?
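To illustrate the kind of comparison I have in mind, here is a rough simulation harness, assuming Exp(1) data and the sample mean as the statistic (the sample size, resample count, and repetition count are all arbitrary placeholders). It estimates the empirical coverage of the CLT interval and the percentile bootstrap interval side by side:

```python
import numpy as np

rng = np.random.default_rng(1)
n, B, reps = 30, 2_000, 1_000
true_mean = 1.0  # mean of Exp(1)

cover_clt = cover_boot = 0
for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n)

    # CLT-based 95% interval: x_bar +/- 1.96 * s / sqrt(n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)
    cover_clt += (x.mean() - half <= true_mean <= x.mean() + half)

    # Percentile bootstrap 95% interval for the mean
    boot_means = rng.choice(x, size=(B, n), replace=True).mean(axis=1)
    lo, hi = np.percentile(boot_means, [2.5, 97.5])
    cover_boot += (lo <= true_mean <= hi)

print(f"CLT coverage:       {cover_clt / reps:.3f}")
print(f"Bootstrap coverage: {cover_boot / reps:.3f}")
```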
Our professor said that the accuracy of the bootstrap compared to the "classical" method depends on the skewness of the underlying distribution, i.e. the less skewness, the better the bootstrap performs. Unfortunately, he didn't explain this in more detail.