Possible Duplicate:
What are examples where a “naive bootstrap” fails?
Perhaps this question will turn out to be a bit soft, but I think it may have a hard answer. Feel free to move it to discussion or CW if it turns out to be more philosophical.
I am a computer scientist by training, and perhaps because of the old proverb about how things appear when you're holding a hammer, bootstrapping seems like the most natural solution to just about every problem I see in hypothesis testing.
For example, today, while answering a question here on CV, I even suggested bootstrapping for what was really just a straightforward t-test situation (without noticing what it reduced to!).
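To make the scenario concrete, here is a minimal sketch of the kind of thing I mean: a naive percentile bootstrap for a difference in two means, which is exactly the quantity a two-sample t-test addresses. The data and all names (`a`, `b`, `n_boot`) are hypothetical, just for illustration.

```python
import random
import statistics

random.seed(0)

# Two hypothetical independent samples; a t-test would compare their means.
a = [random.gauss(0.0, 1.0) for _ in range(30)]
b = [random.gauss(0.5, 1.0) for _ in range(30)]

observed = statistics.mean(b) - statistics.mean(a)

# Naive percentile bootstrap: resample each group with replacement
# and recompute the difference in means.
n_boot = 5000
diffs = []
for _ in range(n_boot):
    ra = [random.choice(a) for _ in a]
    rb = [random.choice(b) for _ in b]
    diffs.append(statistics.mean(rb) - statistics.mean(ra))

diffs.sort()
lo = diffs[int(0.025 * n_boot)]
hi = diffs[int(0.975 * n_boot)]
print(f"observed diff: {observed:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```

Nothing here is wrong, exactly; it's just a lot of machinery for a question the t-test answers directly under mild assumptions.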
This led me to a couple of questions:
First, is there a reason to avoid bootstrapping other than the tendency to inadvertently conceal assumptions about the underlying data or distributions? That is, are there reasons to avoid bootstrapping beyond "it's easier to shoot yourself in the foot"?
Second, perhaps a little more philosophically: in many fields, my own included, one sees a lot of bad statistics. I'm curious whether, for a person with little statistical education, it is easier to break things using conventional frequentist hypothesis tests or using bootstrapping.
Obviously the conventional methods are harder to break in the sense that their assumptions are clearly stated, but in my experience people often don't read the fine print and ignore the assumptions anyway. If a person blindly applies either method to the data, without heeding the underlying assumptions, do you think they are more likely to draw bad conclusions using bootstrapping than using conventional methods of inference?