
I am not able to find this specific answer anywhere:

If I ask participants questions about different topics and then test a hypothesis for each of these questions, do I have to correct for multiple testing? (i.e., are the tests independent because they concern different topics, or dependent because they come from the same participants?)

– Onerock
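
A minimal sketch in Python of the setup being asked about, assuming SciPy and statsmodels are available; the simulated data, the topic count, and the choice of a one-sample t-test are illustrative assumptions, not from the question. Holm's step-down method (like Bonferroni) controls the familywise error rate under arbitrary dependence between the tests, so it stays valid even though the tests share participants:

```python
# A minimal sketch (assumed setup, not from the question): one test per
# topic on the same participants, followed by a familywise correction.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_participants, n_topics = 50, 5

# Simulated responses: each participant answers one question per topic.
responses = rng.normal(size=(n_participants, n_topics))

# One-sample t-test per topic against a null mean of 0.
pvals = [stats.ttest_1samp(responses[:, j], 0.0).pvalue for j in range(n_topics)]

# Holm (like Bonferroni) controls the familywise error rate under
# arbitrary dependence, so sharing participants across tests is fine.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for j, (p, r) in enumerate(zip(p_adj, reject)):
    print(f"topic {j}: adjusted p = {p:.3f}, reject = {r}")
```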
  • There is no situation in which you have to correct for multiple testing - see https://stats.stackexchange.com/questions/120362/whats-wrong-with-bonferroni-adjustments/. – fblundun Apr 14 '21 at 07:49
  • @fblundun The dead salmon from the dead-salmon study would disagree. – rep_ho Apr 14 '21 at 07:58
  • But what is the most sensible way to proceed if I want to publish? My approach would be to present the individual p-values (etc.) in the results section and not discuss alpha there. In the methodology, however, I would state my initial global alpha of 0.05 and come back to it in the discussion (where I should probably say something about multiplicity correction). – Onerock Apr 14 '21 at 08:15
  • @rep_ho imagine that [the researchers to whom you refer](http://bytesizebio.net/2010/10/27/but-did-you-correct-your-results-using-a-dead-salmon/) had randomly picked a single voxel and tested the claim that it recognizes human emotion. If they happened to find significant (p<0.05) evidence of activity, we still wouldn't accept their hypothesis, even though there's no multiplicity involved. So the justification for rejecting the results of their real-life study must be something other than the necessity of multiplicity correction. – fblundun Apr 14 '21 at 08:39
  • They didn't test one voxel; they tested all the brain voxels in the image, as is usually done, and that's where the multiplicity comes from. – rep_ho Apr 14 '21 at 08:49
  • @rep_ho I know; my point is that if your reason for rejecting the study is the lack of multiplicity correction, you then have to come up with an entirely separate argument for why you would have rejected the study if it had only considered a single voxel. And whatever that new argument is, it should also apply to the multi-voxel study that was actually carried out. – fblundun Apr 14 '21 at 08:51
  • @fblundun So what would you do with the salmon's brain? – rep_ho Apr 14 '21 at 08:55
  • @rep_ho If you're asking why I wouldn't believe that a particular voxel of a dead salmon's brain lights up in response to human emotion, I think my answer is: this hypothesis is so incompatible with my understanding of biology that it would take more than a p-value of 0.001 for me to take it seriously. – fblundun Apr 14 '21 at 09:49
  • You don't need an implausible experiment to justify Bonferroni. Consider a GWAS where there is doubt about whether any association exists at all. Think of tests of efficient markets in finance. Think of tests for adverse events in clinical trials. In any case where there is sparsity, Bonferroni looks OK; FDR corrections look like Bonferroni in that case, and even Bayesian posterior probabilities look like Bonferroni. Sure, there are better things one can do, but it seems silly to dismiss it out of hand. – BigBendRegion Apr 15 '21 at 01:09
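
To put a number on rep_ho's multiplicity point above: with m independent tests each run at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^m. A quick illustrative computation (the test counts are hypothetical, not taken from the salmon study):

```python
# Familywise false-positive rate for m independent tests at level alpha.
# The values of m are illustrative, not taken from the salmon study.
alpha = 0.05
for m in (1, 10, 1_000, 100_000):
    print(f"m = {m:>6}: P(at least one false positive) = {1 - (1 - alpha) ** m:.4f}")
```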
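
And a hedged sketch of BigBendRegion's sparsity point, again assuming statsmodels; the p-value counts and distributions are simulated assumptions, chosen only to make the signals sparse and strong. In that regime, Benjamini-Hochberg FDR rejections tend to track Bonferroni's:

```python
# Simulated sparse setting: 10 real effects among 10,000 tests. The
# numbers below are illustrative assumptions, not real data.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
m, n_true = 10_000, 10
p_null = rng.uniform(size=m - n_true)              # true nulls: uniform p-values
p_alt = rng.uniform(0.0, 1e-8, size=n_true)        # real effects: p-values near zero
pvals = np.concatenate([p_alt, p_null])

# With sparse, strong signals, FDR control rejects nearly the same set
# as the (more conservative) Bonferroni correction.
for method in ("bonferroni", "fdr_bh"):
    reject, *_ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method}: {reject.sum()} rejections")
```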

0 Answers