To understand why either method can be more or less suitable for a problem, let's consider how they work:
Bootstrap
Sample $B$ times with replacement from your original sample. Calculate the statistic of interest on each bootstrap sample and use the standard deviation of that statistic across the bootstrap samples as an approximation of its standard error. Typically, $B = 1,000$ or even $10,000$. (In their book, however, Efron and Hastie argue that for standard errors, as few as $B = 200$ should suffice.)$^{[1]}$
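As a minimal sketch (in Python with NumPy; the data, the statistic, and $B$ are arbitrary choices for illustration), the procedure looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=10, scale=2, size=50)    # original sample (made-up data)

B = 1_000                                   # number of bootstrap resamples
boot_stats = np.empty(B)
for b in range(B):
    resample = rng.choice(x, size=x.size, replace=True)  # sample WITH replacement
    boot_stats[b] = resample.mean()         # statistic of interest (here: the mean)

se_boot = boot_stats.std(ddof=1)            # SD across resamples ~ standard error
print(f"bootstrap SE of the mean:  {se_boot:.4f}")
print(f"analytic SE for reference: {x.std(ddof=1) / np.sqrt(x.size):.4f}")
```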
Jackknife
The simplest jackknife uses a resampling scheme where you leave out $1$ observation at a time, ending up with $n$ subsamples, each of size $n - 1$. Then you proceed the same way you would with bootstrapping: calculate the statistic of interest on each subsample and use these to obtain an approximation of the standard error.$^\dagger$ Typically this only requires $n$ subsamples, although the delete-$d$ version of the jackknife ($\binom{n}{d}$ subsamples) can grow rather large.
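The delete-$1$ version, again as a rough sketch in Python (same made-up data and statistic as above):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(loc=10, scale=2, size=50)    # original sample (made-up data)
n = x.size

# n leave-one-out estimates, each computed on a subsample of size n - 1
theta_i = np.array([np.delete(x, i).mean() for i in range(n)])
theta_bar = theta_i.mean()

# jackknife SE: sqrt( (n - 1) / n * sum_i (theta_i - theta_bar)^2 )
se_jack = np.sqrt((n - 1) / n * np.sum((theta_i - theta_bar) ** 2))
print(f"jackknife SE of the mean: {se_jack:.4f}")
```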
The Difference for Complex Designs
Here's the crux: sampling with replacement from your original sample (i.e. the bootstrap) leaves out, on average, $e^{-1} \cdot 100\% \approx 36.8\%$ of your original sample and introduces exact duplicates into the subsamples. In contrast, the jackknife approach only 'costs' you the $1$ observation that is left out of each subsample.$^\ddagger$
In complex cases, such as estimating variance components in nested mixed effects models, leaving out a single observation at a time causes problems far less often than taking random samples with replacement, because in bootstrap samples:
- Large imbalances can occur due to leaving out over a third of your (presumably balanced) design;
- Random effect categories with just a single observation are more likely to occur;
- Random effect categories with just one unique, repeated observation can occur.
Overall, this means that certain variance components may become impossible to estimate, and convergence problems are almost certain to occur in at least some of your bootstrap samples; the quick simulation below gives a feel for how often this happens even in a small, balanced design.
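A rough simulation sketch (hypothetical numbers, in Python with NumPy) of a balanced design with $10$ groups of $5$ observations, counting how often a bootstrap resample leaves a group empty, with a single observation, or with one row repeated several times:

```python
import numpy as np

rng = np.random.default_rng(0)
groups = np.repeat(np.arange(10), 5)        # 10 groups x 5 observations, balanced

n_rep = 1_000
empty, singleton, one_unique = 0, 0, 0
for _ in range(n_rep):
    idx = rng.integers(0, groups.size, groups.size)    # bootstrap row indices
    for g in range(10):
        picked = idx[groups[idx] == g]                  # rows drawn from group g
        if picked.size == 0:
            empty += 1                                  # group dropped entirely
        elif picked.size == 1:
            singleton += 1                              # group with one observation
        elif np.unique(picked).size == 1:
            one_unique += 1                             # one row repeated several times

print(f"per-group rates over {n_rep} resamples: "
      f"empty {empty / (10 * n_rep):.1%}, "
      f"singleton {singleton / (10 * n_rep):.1%}, "
      f"single unique row {one_unique / (10 * n_rep):.1%}")
```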
Efron and Hastie$^{[1]}$ call this behavior of the bootstrap "shaking the data more violently", and while it can indeed be problematic for complex hierarchical designs, it is not without advantages either: the jackknife standard error is known to be positively biased, and it behaves especially poorly when the statistic of interest is not a smooth function of the data (the sample median is the classic example). The bootstrap, on the other hand, does not depend on local derivatives and works just fine even when the statistic is not smooth.
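To make the smoothness point concrete, here is a small sketch (Python/NumPy, made-up data) applying both estimators to the sample median: for a continuous sample, the delete-$1$ medians take on only a handful of distinct values, which is why the jackknife standard error is known to be unreliable for the median, while the bootstrap has no such issue:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=101)                    # made-up sample, odd n
n = x.size

# delete-1 jackknife SE of the median
theta_i = np.array([np.median(np.delete(x, i)) for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((theta_i - theta_i.mean()) ** 2))

# bootstrap SE of the median
boot = np.array([np.median(rng.choice(x, size=n, replace=True)) for _ in range(2_000)])
se_boot = boot.std(ddof=1)

# the leave-one-out medians take at most 3 distinct values here, so the
# jackknife SE rests on very little information about the sampling variability
print(f"distinct leave-one-out medians: {np.unique(theta_i).size}")
print(f"jackknife SE: {se_jack:.3f}   bootstrap SE: {se_boot:.3f}")
```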
$\dagger$: The jackknife standard error is given by $\sqrt{\frac{n - 1}{n} \sum_{i=1}^{n} \big(\hat{\theta}_i - \bar{\hat{\theta}} \big)^2}$, where $\hat{\theta}_i$ is the estimate computed with observation $i$ left out and $\bar{\hat{\theta}}$ is the mean of these $n$ estimates.
$\ddagger$: Leave-$d$-out would of course cost you more, but still probably less than the bootstrap, unless $d$ is so large that the subsamples end up with a rather different sample size than the original sample.
$[1]$: Efron, Bradley, and Trevor Hastie. Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. New York, NY: Cambridge University Press, 2016. Print.