All other things being equal, is there ever a time when a t-test would be preferable to a permutation test? Computational time may be one reason, but for our purposes assume it does not matter.
Isn't a permutation test the superior option in this case?
I am a fan of permutation tests in general, but there are some important considerations in their construction that could be interpreted as disadvantages. They also have benefits, though, that make me come down in favor of permutation tests in the vast majority of situations.
Benefits
For designed experiments, you can always construct a permutation test that matches your experimental design. There are a number of papers about this, but one of my favorites is Permutation Tests for Multi-Factorial Analysis of Variance by Marti Anderson.
When constructed properly, the permutation test does not really rely on unverifiable assumptions about your data. This is discussed at length by Tukey in Tightening the Clinical Trial which is worth a read if you are interested in non-parametric analyses.
These two benefits are worth a lot - the p-values (and confidence intervals) generated by a permutation test can be justified solely by the construction of the test. Moreover, the main benefit of using an asymptotic test like the t-test is that it is much easier to compute, but asymptotic tests can never be more exact than the permutation test. Therefore, if computational power is not limiting, you should go for the permutation test.
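To make the "justified by construction" point concrete, here is a minimal sketch of a two-sample permutation test for a difference in means (the function name, sample sizes, and effect size are illustrative, not from any particular reference):

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=None):
    """Two-sided permutation test for a difference in means.

    Pools the two samples, repeatedly reassigns group labels at
    random, and counts how often the permuted statistic is at
    least as extreme as the observed one.
    """
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    n_x = len(x)
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        hits += abs(perm[:n_x].mean() - perm[n_x:].mean()) >= observed
    # Add 1 to numerator and denominator so the p-value is never 0:
    # the observed labelling is itself one valid permutation.
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.8, 1.0, size=30)
p = permutation_test(x, y, seed=1)
```

The p-value needs no distributional assumptions: it is valid because, under the null hypothesis of exchangeability, every relabelling of the pooled data was equally likely.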
That's not to say permutation tests are perfect, however.
Drawbacks
Permutation tests are inefficient compared to asymptotic tests. When you have only a few observations, it might be impossible to control $\alpha$ at the level you want and still do a permutation test. This can be mitigated by collecting more data, however.
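The small-sample limitation is easy to quantify: the smallest attainable p-value is one over the number of distinct label assignments. A short sketch (the group sizes here are just an illustrative example):

```python
from math import comb

# With n1 = n2 = 4 observations per group, the number of distinct
# ways to assign group labels to the pooled data is C(8, 4) = 70,
# so the smallest attainable p-value from an exact permutation
# test is 1/70, roughly 0.014. Testing at alpha = 0.01 is therefore
# impossible without collecting more data.
n1, n2 = 4, 4
n_assignments = comb(n1 + n2, n1)
min_p = 1 / n_assignments
```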
Choice of test statistic is a much more important drawback to keep in mind. As a general rule of thumb, for any given metric $\beta$, you should use $\beta /s.e.(\beta)$ as the test statistic. Permutation tests tend to be sensitive to differences in distributions rather than differences in the parameter of interest, but using a pivotal (or approximately pivotal) test statistic fixes this problem. One handy way of doing that is simply dividing your comparison of interest by an estimate of its standard error. This is discussed by Chung and Romano in Exact and asymptotically robust permutation tests. This does not completely fix the issue - comparing asymmetric distributions with massive heteroscedasticity is still difficult, but it is also quite difficult with asymptotic tests.
On the whole, I come down on the side of the permutation test for two main reasons: 1. you can analyze any experimental design with a permutation test (though not necessarily an exact one) and 2. its drawbacks can be mitigated.