In data analysis, one usually needs to verify that the data have property $X$ before applying method $Y$, which takes $X$ as a prerequisite. To illustrate, possible values of $(X, Y)$ include $(\text{homogeneity of variance and normality}, \text{$t$-test})$, $(\text{independence}, \text{ANOVA})$, and $(\text{stationarity}, \text{ARIMA-based inference})$.
Before proceeding, you must answer the following question: do the data deviate from the ideal condition, in which property $X$ holds perfectly, badly enough to "forbid" the use of method $Y$? As far as I know, this question is usually addressed with a hypothesis test. For example, a normality test is run first, and if normality cannot be rejected, we take the answer to be "No" and readily apply a $t$-test.
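As a concrete illustration of that workflow, here is a minimal sketch in Python using `scipy.stats` (the data, the seed, and the threshold $\alpha = 0.05$ are arbitrary choices of mine, not anything prescribed):

```python
# Sketch of the "test the assumption, then apply the method" workflow.
# Data and alpha are arbitrary illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=30)  # sample 1
b = rng.normal(loc=0.5, scale=1.0, size=30)  # sample 2

alpha = 0.05

# Pre-tests for the two-sample t-test's assumptions:
_, p_norm_a = stats.shapiro(a)   # normality of sample 1
_, p_norm_b = stats.shapiro(b)   # normality of sample 2
_, p_var = stats.levene(a, b)    # homogeneity of variance

if min(p_norm_a, p_norm_b, p_var) > alpha:
    # "Cannot reject" the assumptions, so proceed with the t-test.
    t_stat, p_val = stats.ttest_ind(a, b)
    print(f"t = {t_stat:.3f}, p = {p_val:.4f}")
else:
    print("An assumption test rejected; the t-test may be inappropriate.")
```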
This appears to work, thanks to the robustness of the $t$-test. However, this answer points out that the hypothesis test answers a different question from the one we actually care about, namely: is there convincing evidence of *any* deviation from property $X$? The answer to *that* question is almost always "Yes" once the dataset is big enough.
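To see the large-sample point concretely, here is a small simulation sketch (my own illustration, not taken from the linked answer): data drawn from a $t$-distribution with 30 degrees of freedom are practically indistinguishable from normal, yet a normality test will typically reject once $n$ is large.

```python
# Toy simulation: with enough data, a normality test detects even a
# practically irrelevant deviation. t(30) is visually near-normal;
# the sample sizes and seed are arbitrary illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for n in (100, 10_000, 1_000_000):
    x = rng.standard_t(df=30, size=n)
    _, p = stats.normaltest(x)  # D'Agostino-Pearson normality test
    print(f"n = {n:>9}: p = {p:.2e}")
```

With this setup the $p$-value typically shrinks toward zero as $n$ grows, even though the deviation from normality is of no practical consequence for a $t$-test.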
My question is: do all methods have this kind of "robustness" to some extent? If not, why is it legitimate to verify that data have property $X$ by hypothesis testing? To paraphrase: does $p > \alpha$ when testing for $X$ always imply the applicability of method $Y$?