2

Will the significance level and power hold constant if I don't see statistically significant results and decide to keep collecting samples before testing again?

kjetil b halvorsen
ThomasO
  • @Dave's answer has covered why what you have proposed is dangerous under a classical null hypothesis significance test setting. You might want to consider employing sequential testing or Bayesian testing (e.g. via Bayes factor) techniques to get round this problem _before you start your experiment_. – B.Liu Jul 23 '21 at 19:11
  • Look at the tag [tag:sequential-analysis]. – kjetil b halvorsen Jul 23 '21 at 19:11
  • Also see https://stats.stackexchange.com/questions/20676/why-is-running-split-tests-until-statistically-significant-a-bad-thing-or-is/20677#20677. – whuber Jul 23 '21 at 19:42
  • The answer here and at the duplicate address $\alpha$ but not power. Generally speaking, increasing $\alpha$ will move the whole power curve with it (that is, increasing $\alpha$ increases power). – Glen_b Jul 24 '21 at 10:34

1 Answer

3

No! This is a dangerous practice. In effect, you’re playing something like the following game.

Flip a coin, betting \$1000 on heads. It lands on tails (no rejection), a result you do not like. Now bet \$2000 on the next flip. It comes up heads, you collect your money, you walk away from the game, and you declare yourself a master coin-flip player since you won a bunch of money.

You can simulate this with a true null hypothesis to see how your type I error rate inflates. Just use some conditional (if/else) logic in a loop to keep adding observations to your sample until you either achieve a rejection ($p\le\alpha$) or reach some absolute maximum sample size. You will see a high error rate, much higher than the $\alpha$-level you deem acceptable.
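Here is a minimal sketch of such a simulation in Python (the language, sample sizes, and choice of a one-sample t-test are illustrative assumptions, not anything specified above). It draws from a standard normal, so the null hypothesis of zero mean is really true, and keeps adding observations one at a time until the test rejects or a cap is reached.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)  # seed for reproducibility
alpha = 0.05
n_start, n_max = 10, 300        # illustrative starting and maximum sample sizes
n_sims = 1000                   # number of simulated "experiments"

rejections = 0
for _ in range(n_sims):
    x = list(rng.normal(0.0, 1.0, n_start))  # the null (mean 0) is true
    while True:
        p = stats.ttest_1samp(x, popmean=0.0).pvalue
        if p <= alpha:                  # "significant" -- stop and declare a win
            rejections += 1
            break
        if len(x) >= n_max:             # hit the cap -- give up, fail to reject
            break
        x.append(rng.normal(0.0, 1.0))  # otherwise collect one more observation

print(f"empirical type I error rate: {rejections / n_sims:.3f} "
      f"(nominal alpha = {alpha})")
```

With settings like these the empirical rejection rate lands well above the nominal 5%, and it only grows as you raise `n_max`, since every additional peek is another chance to stop on a spuriously small p-value.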

Dave
  • True, but you leave out some other very important considerations. First, more observations allow better estimation of the parameters of the population or model. Second, even within the hypothesis testing framework, an increase in sample size brings an increase in power along with the increased risk of a false positive. Third, the idea of false positive errors is only relevant in a subset of inferential approaches and settings. Simply saying "don't do it" hides relevant factors. – Michael Lew Jul 23 '21 at 21:51
  • 1
    It’s always easy to increase power at the expense of false positives…just set $\alpha = 1$! – Dave Jul 23 '21 at 22:04
  • Another trivialising response... – Michael Lew Jul 23 '21 at 22:28