
A study I'm familiar with has been encouraged by its review board to recruit more participants.

This seems a sensible request on general grounds, but the reviewers suggested that it was essential to increase the power of the study or else there would be too high a chance of spuriously finding a statistically significant mediation/moderation effect.

To me this is very counterintuitive, since power refers to the ability to correctly reject a false null hypothesis, and does not speak to what happens in the event the null hypothesis is true. It's also counterintuitive to me since it's not really the kind of situation in which the null hypothesis could be literally true, even if the size of the effect is extremely small.

Were the reviewers correct? There is a related question here, on which no answer was accepted.

  • See also http://stats.stackexchange.com/questions/176384/do-underpowered-studies-have-increased-likelihood-of-false-positives/176390#176390 – Florian Hartig Oct 18 '16 at 08:12

1 Answer


I'm not sure the reviewers have the reasoning right, but here's the issue.

It doesn't matter what the sample size is: the probability of a type I error stays the same, because it is fixed by the chosen significance level (e.g. α = 0.05), not by power.
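A quick simulation illustrates this (a sketch I'm adding for illustration, not part of the original answer): testing a true null with a two-sided z-test at α = 0.05 rejects about 5% of the time whatever the sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
Z_CRIT = 1.959964  # two-sided 5% critical value of the standard normal

def type1_rate(n, trials=20_000):
    """Fraction of trials that reject a TRUE null (mean = 0) with a z-test."""
    samples = rng.normal(size=(trials, n))     # data drawn under H0
    z = samples.mean(axis=1) * np.sqrt(n)      # under H0, z ~ N(0, 1)
    return np.mean(np.abs(z) > Z_CRIT)

# Roughly 0.05 at every sample size -- power changes with n, alpha does not.
rates = {n: type1_rate(n) for n in (10, 100, 1000)}
```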

BUT: Studies that have statistically significant results are more likely to be published.

Let's say 100 studies are done on real effects (the null hypothesis is false), but the power is low - 10%. So 10 of these studies get a statistically significant result.

And another 100 studies are done in which the null hypothesis is true. Here the chance of a significant result is not power but the significance level - 5%. So 5 of these studies find a statistically significant result, and every one of them is a type I error.

All 15 of these significant studies are published: 10 of them reflect true effects, 5 of them are type I errors. Hence low power has led to one-third of the published significant results being type I errors.
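The arithmetic above can be written as a small helper (a hypothetical function I'm adding to mirror the answer's numbers, not something from the original):

```python
def fraction_false_positives(n_true, power, n_null, alpha):
    """Among significant results, what fraction are type I errors?

    n_true: studies where a real effect exists
    power:  probability such a study reaches significance
    n_null: studies where the null hypothesis is true
    alpha:  significance level (type I error rate)
    """
    true_hits = n_true * power    # real effects detected
    false_hits = n_null * alpha   # type I errors
    return false_hits / (true_hits + false_hits)

# 100 real-effect studies at 10% power, 100 null studies at alpha = 5%:
frac = fraction_false_positives(100, 0.10, 100, 0.05)  # 5 / 15 = 1/3
```

Raising power to 80% in the same scenario would drop this fraction from 1/3 to 5/85 ≈ 6%, which is the sense in which low power inflates the share of false positives in the published record.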

Jeremy Miles