
I've been doing quite a bit of reading on the topic of non-normality and how it pertains to the F-test. My understanding is that normality tests aren't very useful, as argued in this Cross Validated thread.

However, the following question still stands: if the F-test isn't robust against non-normal data, what should a practitioner do? Upon researching this concern, I've found many resources (for example, this GraphPad guide) that essentially advise the practitioner to keep using the standard tests (i.e., t-test, ANOVA, etc.) if the data are "only approximately Gaussian." OK, but at what point do we step away from, say, the F-test and use a non-parametric test? When is the non-normality too much?
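For concreteness, here is a small simulation (my own sketch, not from the cited guide; it assumes NumPy and SciPy and the variance-comparing F-test that the comments below pin down). It estimates the test's empirical type I error rate when both samples come from the same distribution: near the nominal 5% for Gaussian data, but badly inflated for skewed exponential data, which is the sense in which this F-test is "not robust."

```python
import numpy as np
from scipy import stats

def f_test_rejects(x, y, alpha=0.05):
    """Two-sided F-test of H0: equal variances. True if H0 is rejected."""
    F = np.var(x, ddof=1) / np.var(y, ddof=1)
    dfn, dfd = len(x) - 1, len(y) - 1
    p = 2 * min(stats.f.sf(F, dfn, dfd), stats.f.cdf(F, dfn, dfd))
    return p < alpha

def empirical_size(sampler, n=25, n_sim=2000, seed=0):
    """Fraction of false rejections when both samples share one distribution."""
    rng = np.random.default_rng(seed)
    hits = sum(f_test_rejects(sampler(rng, n), sampler(rng, n))
               for _ in range(n_sim))
    return hits / n_sim

# Gaussian data: the empirical size stays close to the nominal 5% ...
size_normal = empirical_size(lambda rng, n: rng.normal(size=n))
# ... exponential (skewed, heavy-tailed) data: the size inflates severely,
# because the F-test's null distribution is very sensitive to kurtosis.
size_expo = empirical_size(lambda rng, n: rng.exponential(size=n))
print(size_normal, size_expo)
```

Running something like this on distributions resembling your own data is one practical way to judge whether the non-normality is "too much" for the test you plan to use.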

daOnlyBG
  • Are you asking about normal distributions of the variables themselves, or of the residuals after fitting a model? If you have a particular experimental design in mind then perhaps a more specific answer can be supplied; too often the answer to a general question is: "It depends." – EdM Sep 04 '15 at 19:06
  • @EdM- sorry for not mentioning it sooner; the variables themselves. – daOnlyBG Sep 04 '15 at 19:19
  • 4
    The amount of non-normality that can destroy the properties of parametric tests is surprisingly small in some settings. But which $F$ test are you referring to? If a test for equality of variances you might also entertain the use of more robust measures of dispersion such as Gini's mean difference. – Frank Harrell Sep 04 '15 at 19:33
  • @FrankHarrell yes, I am referring to the F-test that compares variances. – daOnlyBG Sep 04 '15 at 19:43
  • 1
    See if bootstrapping the difference in two Gini mean differences is a good idea. – Frank Harrell Sep 04 '15 at 22:22
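Frank Harrell's suggestion could be sketched roughly as follows (my own illustration, not code from the thread; the function names are hypothetical and NumPy is assumed). Gini's mean difference (GMD) is the average absolute difference over all distinct pairs of observations, a dispersion measure that is more robust than the variance; a percentile bootstrap then gives a confidence interval for the difference in GMDs between two groups.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's mean difference: average |x_i - x_j| over all distinct pairs,
    computed in O(n log n) from the order statistics of x."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # sum over pairs of |x_i - x_j| equals sum_i (2i - n - 1) * x_(i)
    return 2.0 / (n * (n - 1)) * np.sum((2 * i - n - 1) * x)

def bootstrap_gmd_diff_ci(x, y, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for GMD(x) - GMD(y). If the interval
    excludes 0, the two groups plausibly differ in dispersion."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        yb = rng.choice(y, size=len(y), replace=True)
        diffs[b] = gini_mean_difference(xb) - gini_mean_difference(yb)
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Unlike the variance F-test, this approach makes no normality assumption about the sampling distribution of the statistic, at the cost of relying on the bootstrap approximation.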

0 Answers