We know that underpowered tests, by definition, carry a greatly increased probability of a Type II error: a greater chance of failing to reject the null hypothesis despite the existence of a 'true' underlying effect.
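(In symbols: if $\beta$ is the probability of a Type II error, then power $= 1 - \beta$, so an underpowered test is precisely one with a high chance of missing a real effect.)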
The dangers of using underpowered tests are frequently taught to students in the context of the commonly used inferential statistics (e.g. ANOVA and the t-test). However, I have never heard it mentioned (including in statistics textbooks) that statistical power also matters for the supportive tests, such as those used to check for homogeneity of variances and normality of the distribution.
I would argue that the danger is even greater for these supportive tests, because there is a temptation to see a 'non-significant' p > .05 result, think "oh great, it's passed this step in the process!", and assume normality and homoscedasticity. But surely, if our Levene's test is underpowered, there will be a high probability of obtaining a 'non-significant' result even when, for example, the true variances are not equal.
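To make this concrete, here is a rough simulation sketch in Python (my own construction; the group size of 10, the SDs of 1 and 2, and the 5000 repetitions are arbitrary choices) estimating how often Levene's test comes back 'non-significant' even though the true variances differ by a factor of four:

```python
# Rough sketch: how often does Levene's test "pass" (p > .05) in small samples
# when the true variances genuinely differ? All numbers here are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims = 5000
n_per_group = 10          # small groups, as in many underpowered designs
sd1, sd2 = 1.0, 2.0       # true variances differ by a factor of four

missed = 0
for _ in range(n_sims):
    g1 = rng.normal(0, sd1, n_per_group)
    g2 = rng.normal(0, sd2, n_per_group)
    _, p = stats.levene(g1, g2)   # SciPy's default uses the median-centred variant
    if p > 0.05:
        missed += 1               # test "passes" despite unequal true variances

print(f"Proportion of 'non-significant' Levene results: {missed / n_sims:.2f}")
```

The exact proportion doesn't matter here; the point is that the 'pass rate' of an assumption check under a genuine violation can be estimated rather than taken on faith.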
In short, if our supportive tests are underpowered, we cannot conclude that the assumptions of ANOVA, t-tests, and other such procedures have been met solely on the basis of a 'non-significant' result.
My questions are:

- Am I correct that statistical power is important for these supportive tests?
- What should we do about it? Never run these analyses on small samples? Cross-reference non-significant results from the supportive tests with plots and visuals (something like the sketch below)?
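To illustrate the last point, this is the kind of visual cross-check I have in mind (simulated data and arbitrary group sizes, purely for illustration): a QQ-plot for normality and side-by-side boxplots for spread, looked at alongside whatever the supportive test says.

```python
# Purely illustrative: fake data, arbitrary sizes. The idea is to look at the
# distribution and spread directly instead of trusting a single p-value.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0, 1.0, 12)   # hypothetical group A
group_b = rng.normal(0, 2.0, 12)   # hypothetical group B with larger spread

fig, axes = plt.subplots(1, 2, figsize=(8, 4))

# QQ-plot of group A against a normal distribution (rough normality check)
stats.probplot(group_a, dist="norm", plot=axes[0])
axes[0].set_title("QQ-plot, group A")

# Side-by-side boxplots to compare spread across groups (rough variance check)
axes[1].boxplot([group_a, group_b])
axes[1].set_xticklabels(["A", "B"])
axes[1].set_title("Spread by group")

plt.tight_layout()
plt.show()
```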