My understanding of the bootstrap approach is based on Wasserman's framework (almost verbatim):
Let $T_n = g(X_1, \ldots, X_n)$ be a statistic, where $X_1, \ldots, X_n$ is an i.i.d. sample drawn from a distribution $F$. Suppose we want to estimate $V_F(T_n)$, the variance of $T_n$ under $F$.
The bootstrap approach follows these two steps:
1. Estimate $V_F(T_n)$ with $V_{\hat{F}}(T_n)$, where $\hat{F}$ is the empirical distribution function.
2. Approximate $V_{\hat{F}}(T_n)$ using simulation.
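For concreteness, here is how I picture step 2 (a minimal sketch in Python; the normal sample, the median as the statistic, and the number of replications are just placeholders of my own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=30)        # observed sample X_1, ..., X_n (placeholder data)
T = np.median                  # the statistic T_n = g(X_1, ..., X_n) (placeholder choice)

B = 10_000                     # number of bootstrap replications
t_star = np.empty(B)
for b in range(B):
    # draw X_1*, ..., X_n* i.i.d. from F-hat, i.e. resample the data with replacement
    t_star[b] = T(rng.choice(x, size=x.size, replace=True))

v_boot = t_star.var(ddof=1)    # Monte Carlo approximation of V_{F-hat}(T_n)
print(v_boot)
```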
Do I understand correctly that the simulation in step 2 could be replaced with an exact calculation, except that it is infeasible for practically useful values of $n$? Here's my thinking: $V_{\hat{F}}(T_n)$ is determined exactly by integrals of the form $\int T_n(x_1, \ldots, x_n)\,d\hat{F}(x_1)\,d\hat{F}(x_2) \cdots d\hat{F}(x_n)$ (one such integral for $T_n$ and one for $T_n^2$, since the variance is $E_{\hat{F}}[T_n^2] - (E_{\hat{F}}[T_n])^2$). $\hat{F}$ is a step function with a finite number of steps ($n$ of them), so we can ignore all points except the $n$ observed data points where $\hat{F}$ puts non-zero mass. Each integral is therefore exactly equal to a sum of $n^n$ terms. Once $n$ exceeds 14, a simple direct calculation becomes infeasible.
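To illustrate what I mean by an exact calculation, here is a toy version for $n = 6$ (again with the median and made-up data as placeholders); it enumerates all $n^n$ equally likely ordered resamples, which is exactly what becomes infeasible as $n$ grows:

```python
import numpy as np
from itertools import product

x = np.array([0.3, -1.2, 0.8, 2.1, -0.5, 1.4])   # tiny sample, n = 6 (placeholder data)
n = x.size
T = np.median

# Enumerate all n^n ordered resamples from F-hat, each with probability n^(-n).
vals = np.array([T(x[list(idx)]) for idx in product(range(n), repeat=n)])

# Moments over the uniform distribution on these tuples are exact under F-hat.
exact_var = vals.var()        # ddof=0: exactly V_{F-hat}(T_n)
print(exact_var)
```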
But all we are trying to do is compute an integral. Why not replace the brute-force bootstrap simulation with one of the traditional algorithms for numerical integration? Wouldn't that yield much higher precision for the same computational time?
Even something as simple as splitting the sample space into sections (perhaps with smaller volumes where the statistic varies faster), and estimating the value of the statistic in each section at its midpoint, seems like it should do better than blind bootstrap resampling.
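To make the "sections + midpoint" idea concrete, here is the kind of thing I have in mind (a toy sketch only, with equal-width sections and the same placeholder data and statistic as above): rewrite the resampling integral as an integral over the unit cube via the inverse empirical CDF, then apply the midpoint rule with $m$ sections per axis.

```python
import numpy as np
from itertools import product

x = np.sort(np.array([0.3, -1.2, 0.8, 2.1, -0.5, 1.4]))   # tiny sample, n = 6
n = x.size
T = np.median

def F_inv(u):
    # Inverse empirical CDF: maps u in (0, 1) to one of the observed data points.
    return x[np.minimum((u * n).astype(int), n - 1)]

# Midpoint rule on [0, 1]^n: m equal sections per axis, integrand evaluated
# at each cell's midpoint; every cell has the same volume m^(-n).
# (Taking m = n would reproduce the exact enumeration above.)
m = 4
mids = (np.arange(m) + 0.5) / m
vals = np.array([T(F_inv(np.asarray(u))) for u in product(mids, repeat=n)])

approx_var = vals.var()       # deterministic approximation of V_{F-hat}(T_n)
print(approx_var)
```

Of course this particular grid still has $m^n$ cells, so it is meant only to illustrate the kind of deterministic rule I am asking about, not to be a serious algorithm.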
What am I missing?
Perhaps the bootstrap works so well and so fast that there's no need for anything more complicated? (For example, if the loss of precision in step 1 is much greater than in step 2, then improvements to step 2 are rather pointless.)