
I have 12 concentrations that I would like to compare between 2 groups; some subjects have 2 measures. I know the concentration also depends on age and sex, so I added those to the linear model and included a random intercept to account for the repeated measures in some of the subjects.

I now have the following models:

conc1 ~ studygroup + time + sex + (1 | subject)
conc2 ~ studygroup + time + sex + (1 | subject)
conc3 ~ studygroup + time + sex + (1 | subject)
...
conc12 ~ studygroup + time + sex + (1 | subject)
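
For concreteness, here is a minimal R sketch of fitting these, assuming `lme4`/`lmerTest` and a hypothetical long-format data frame `dat` (one row per measurement, with columns `conc1`…`conc12`, `studygroup`, `time`, `sex`, `subject`):

```r
library(lmerTest)  # lmer() plus p-values for fixed effects (Satterthwaite df)

# 'dat' is a hypothetical long-format data frame: one row per measurement,
# with columns conc1..conc12, studygroup, time, sex, subject
outcomes <- paste0("conc", 1:12)

# Fit one mixed model per concentration
fits <- lapply(outcomes, function(y) {
  f <- reformulate(c("studygroup", "time", "sex", "(1 | subject)"),
                   response = y)
  lmer(f, data = dat)
})
names(fits) <- outcomes
```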

Then: FDR correction on the p-values, plus a CI for the beta estimate of studygroup (?), reported as:
studygroup conc1: estimate, p-value, CI
studygroup conc2: estimate, p-value, CI
studygroup conc3: estimate, p-value, CI
...
studygroup conc12: estimate, p-value, CI
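
Continuing the sketch above, one way to build that table is the following (the studygroup coefficient is picked out by name, so it works regardless of the factor coding; `BH` is the Benjamini-Hochberg FDR adjustment):

```r
# Collect the studygroup row from each model: estimate, p-value, Wald CI
res <- do.call(rbind, lapply(outcomes, function(y) {
  fit <- fits[[y]]
  co  <- summary(fit)$coefficients
  ci  <- confint(fit, parm = "beta_", method = "Wald")
  row <- grep("^studygroup", rownames(co))
  data.frame(outcome  = y,
             estimate = co[row, "Estimate"],
             p_raw    = co[row, "Pr(>|t|)"],
             ci_low   = ci[rownames(co)[row], 1],
             ci_high  = ci[rownames(co)[row], 2])
}))

# Benjamini-Hochberg FDR across the 12 studygroup tests
res$p_fdr <- p.adjust(res$p_raw, method = "BH")
res
```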

I would like to know how to correct for multiple testing in such a case using FDR. If someone has a reference describing this, with some guidance on what to do with the p-values and CIs, that would be great. Alternatively, resources and thoughts on why not to do this would also be welcome!

I found a post suggesting it is necessary to correct; however, I am quite confused at this point.

Do I have to correct for multiple testing?

Should we address multiple comparisons adjustments when using confidence intervals?

CST

1 Answer


I would not recommend correcting for multiple testing in general; here is an article that argues why not. If you insist, I would suggest presenting both adjusted and unadjusted p-values. Bonferroni adjustments can easily be applied to p-values to correct for multiple testing.
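
For instance, in R both adjustments are a single call to `p.adjust` on the vector of raw p-values (here the 12 studygroup p-values from the question's sketch):

```r
p_raw <- res$p_raw  # the 12 raw studygroup p-values from the models above

p_bonf <- p.adjust(p_raw, method = "bonferroni")  # controls family-wise error rate
p_fdr  <- p.adjust(p_raw, method = "BH")          # controls false discovery rate

round(cbind(unadjusted = p_raw, bonferroni = p_bonf, BH = p_fdr), 4)
```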

Kirsten
  • Thank you. The reason I thought to correct using FDR is that it is less strict than Bonferroni. Nevertheless, I wonder whether it is correct to use only the p-values obtained for the studygroup estimate, or if this is generally not how it is done. Also, if I would like to report the other estimates of the model (e.g. the effect of time), would I also have to correct those first? (I would like to report which concentrations increased with time and which decreased, so I can compare this to the existing literature.) – CST Jun 17 '21 at 10:48
  • You should correct for all tests you wish to interpret. Is that more what you are looking for? – Kirsten Jun 17 '21 at 10:59
  • I am afraid not, really. I was hoping for a reference or something that explains how to FDR-correct in the scenario where I have multiple models as "tests", and some guidance on what to do with the CIs. I will update the question so it is clearer! – CST Jun 17 '21 at 12:09
  • I am not sure I understand what you mean by "multiple models as 'tests'". In multiple models you do multiple tests, and for each test you have a specific significance level, $\alpha$. The correction does not depend on the type of tests but on the number of tests. If you do two tests within one model, it counts the same as doing 2 separate tests. No matter which method (to my knowledge), the correction is done after testing. (continued) – Kirsten Jun 17 '21 at 12:27
  • That is, you change the threshold for accepting your p-value, or recalculate your p-value based on a new significance level $\tilde{\alpha}$ that depends on $\alpha$ and the number of tests you wish to perform. If you want to adjust the confidence level, simply use $\tilde{\alpha}$ in the calculations (see the sketch after these comments). – Kirsten Jun 17 '21 at 12:28
  • The comments above are directly related to the fundamental problem with multiple testing: when you correct the significance level (reduce the type 1 error) to account for multiple testing, you increase the chance of type 2 errors. And there you have the problem. – Kirsten Jun 17 '21 at 12:31
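
A minimal sketch of the adjusted-level idea from these comments, applied to the models fitted in the question, using a Bonferroni-style $\tilde{\alpha} = \alpha/m$. (Note this yields Bonferroni-adjusted intervals; intervals matched to an FDR procedure, e.g. Benjamini-Yekutieli false coverage rate intervals, require a different construction.)

```r
m         <- 12          # number of tests
alpha     <- 0.05
alpha_adj <- alpha / m   # Bonferroni-adjusted significance level

# Wald CIs for the fixed effects at the adjusted confidence level
ci_adj <- lapply(fits, confint, parm = "beta_",
                 level = 1 - alpha_adj, method = "Wald")
ci_adj$conc1             # adjusted intervals for the first model
```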