On this site it has been confirmed multiple times that, contrary to what is often heard, hypothesis tests have no intrinsic issues with large sample sizes. As a matter of fact, the probability of a Type I error when the null hypothesis is true doesn't depend on the sample size (see for example here). However, people are often taught that before performing some inference procedures (ANOVA, inference for linear regression, etc.), they need to check the validity of the underlying assumptions (for example, that the errors are normally distributed) using a hypothesis test (for example, a normality test on the residuals of the linear regression). The unlucky disciple runs a normality test on a sample of size $10^7$, finds that the test rejects the null, and falls into despair. I think this is the point that generates confusion.
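To make the "unlucky disciple" scenario concrete, here is a small sketch (with made-up simulated data, and a hand-rolled Jarque-Bera test so it only needs NumPy): a $t$-distribution with 50 degrees of freedom is visually indistinguishable from a normal, yet at a large enough sample size a normality test rejects this practically irrelevant deviation with an essentially zero p-value.

```python
import numpy as np

rng = np.random.default_rng(0)

def jarque_bera(x):
    """Jarque-Bera normality test. Under the null, JB ~ chi-square
    with 2 df, whose survival function has the closed form exp(-JB/2)."""
    n = len(x)
    z = (x - x.mean()) / x.std()
    skew = np.mean(z**3)           # sample skewness
    kurt = np.mean(z**4) - 3.0     # sample excess kurtosis
    jb = n / 6.0 * (skew**2 + kurt**2 / 4.0)
    return jb, np.exp(-jb / 2.0)   # (statistic, p-value)

# t(50): almost normal (population excess kurtosis is only ~0.13)
x_small = rng.standard_t(df=50, size=100)
x_large = rng.standard_t(df=50, size=1_000_000)

jb_s, p_s = jarque_bera(x_small)
jb_l, p_l = jarque_bera(x_large)
print(f"n = 100:       JB = {jb_s:.2f}, p = {p_s:.3f}")
print(f"n = 1,000,000: JB = {jb_l:.2f}, p = {p_l:.2e}")  # p essentially 0
```

The deviation from normality is the same in both cases; only the power of the test to detect it has changed.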
How would you assess the validity of the assumptions behind an inference procedure without a hypothesis test? If this question is too general, let's just consider the two cases I cited (ANOVA and inference for the coefficients of a linear regression model). I've been advised to make Q-Q plots. They are great, but in some cases their interpretation can be a bit subjective... I'd rather have a tool that lets me estimate by how much to inflate the C.I.s for the $\hat{\beta}_i$ when the residuals "don't look normal"... bootstrap, maybe?
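As an illustration of the bootstrap idea (just a sketch, with made-up data and a deliberately skewed error distribution): a case-resampling ("pairs") bootstrap gives a percentile C.I. for a slope coefficient without leaning on normality of the residuals, and its width can be compared with the classical normal-theory interval.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated regression with skewed, non-normal errors
n = 200
x = rng.uniform(0, 10, size=n)
errors = rng.exponential(scale=2.0, size=n) - 2.0  # mean zero, skewed
y = 1.0 + 0.5 * x + errors

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# Pairs bootstrap: resample (x_i, y_i) rows with replacement,
# refit, and record the slope each time
B = 2000
slopes = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    slopes[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0][1]

# Percentile 95% C.I. for the slope
lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"slope: {beta_hat[1]:.3f}, 95% bootstrap CI: ({lo:.3f}, {hi:.3f})")
```

If the bootstrap interval is noticeably wider than the normal-theory one, that difference is a direct, quantitative answer to "by how much should I inflate the C.I.?", which a Q-Q plot alone cannot give.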
Also, I have another doubt: if at large sample sizes we say that a hypothesis test is not the right tool to check the validity of the assumptions underlying an inference procedure, but we also say that sample size doesn't affect the reliability of NHST, then this would mean that hypothesis tests are never (no matter what the sample size is) appropriate tools to verify inference assumptions... is that correct?