You bring up a good point. In some fields - such as neurobiology - statistical hypothesis tests are often supplementary to the main purpose, because the designs involve direct causal manipulations. Assumptions are frequently violated, but the difference between groups is typically clear-cut and the statistical tests simply reflect that. This is one reason fields that use animal models can "get away with" small samples.
But in most cases you are not working with predominantly causal designs, so that approach is not recommended. And even with a causal experimental design, the t-test is not redundant: it can help you understand how well your manipulation worked.
Building off of that and Harvey's point, I think it is also worth considering the ability to measure how much the dependent variable is affected by the independent variable. A t-test doesn't just tell you whether there is a "significant difference" as judged by the p-value. It is often more useful to quantify the extent to which your experimental manipulation (the IV) explains the change in the DV, for example with the partial eta squared value.
For a t-test example: hospital employees are randomly assigned to have a socialization break period (experimental group) or not have this break (control group), to see if and how it affects employee satisfaction ratings. The partial eta squared value for "break or no break" represents the extent to which the break period explains the variance in satisfaction ratings. So a partial eta squared of 0.137 would tell you that the break period explains about 13.7% of the variance in the employee satisfaction ratings.
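To make that concrete, here is a minimal sketch in stdlib-only Python with made-up satisfaction ratings (the numbers are purely illustrative, not real data). For a two-group design, eta squared can be recovered directly from the t statistic via eta² = t² / (t² + df), which is what the last lines compute:

```python
import math
import statistics as st

# Hypothetical satisfaction ratings (1-10 scale), invented for illustration
break_group = [7.1, 8.0, 6.8, 7.5, 8.2, 7.9, 6.9, 7.4]      # had the break
no_break_group = [6.2, 6.8, 5.9, 7.0, 6.5, 6.1, 6.7, 6.3]   # control

n1, n2 = len(break_group), len(no_break_group)

# Pooled variance for an independent-samples t-test
# (statistics.variance uses the n-1 sample formula)
sp2 = ((n1 - 1) * st.variance(break_group) +
       (n2 - 1) * st.variance(no_break_group)) / (n1 + n2 - 2)

# t statistic and its degrees of freedom
t_stat = (st.mean(break_group) - st.mean(no_break_group)) / \
         math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

# Eta squared: proportion of variance in satisfaction explained by the break
eta_sq = t_stat ** 2 / (t_stat ** 2 + df)

print(f"t({df}) = {t_stat:.3f}, eta^2 = {eta_sq:.3f}")
```

In the one-factor, two-group case eta squared and partial eta squared coincide, since there are no other factors to partial out; with more factors you would get the partial value from an ANOVA table instead.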
Hopefully that makes sense. I think this StackExchange answer helps. It is not exclusively about t-tests, but the idea is much the same.