
Let us say I observe 3 variables in a control condition and a treatment condition. I would like to find out whether the treatment has an effect on all 3 variables at the same time. I should mention that I cannot observe the 3 variables simultaneously: in a first experiment I would measure variable A in control vs. treatment, in a second experiment I would measure variable B in control vs. treatment, and so on.

For each variable, the null hypothesis that the treatment has no effect was tested. So I have 3 p-values, one for each variable (and I only have access to these p-values, not to the raw data).

I read Test for significant excess of significant p-values across multiple comparisons (https://stats.stackexchange.com/questions/171742/test-for-significant-excess-of-significant-p-values-across-multiple-comparisons).

Let us see whether I understand this correctly:

  • $H_A$: all $p_i$ have the same (unknown) non-uniform, non-increasing density.

    This would mean I test whether the treatment has the same effect on all 3 variables.

  • $H_B$: at least one $p_i$ has an (unknown) non-uniform, non-increasing density.

    This would mean the treatment has an effect on at least one variable.

    Is there also a way to test whether there is some effect (but not necessarily the same effect) on all of the variables? Intuitively, this should be the case if all 3 p-values are small, and it should not be the case if the maximum p-value is large, I would say (see the sketch below).
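To make this intuition concrete: if I understand correctly, the "effect on all variables" question corresponds to an intersection-union test, which rejects at level $\alpha$ exactly when every individual p-value is at most $\alpha$, i.e. when the maximum p-value is small. Here is a minimal sketch in base R; the p-values are made up purely for illustration.

```r
# Hypothetical p-values from the three separate experiments (made up)
p <- c(0.03, 0.01, 0.04)
alpha <- 0.05

# Intersection-union ("max p") test of
#   H0: at least one variable is unaffected by the treatment
#   H1: all three variables are affected.
# Reject H0 at level alpha only if every individual test rejects,
# i.e. if the largest of the three p-values is at most alpha.
p_max <- max(p)
reject_all_affected <- p_max <= alpha

p_max                # the largest p-value serves as the combined p-value
reject_all_affected  # TRUE here, since 0.04 <= 0.05
```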

  • Fabian Rost
    • You lack crucial information: namely, the degree to which the three variables are correlated (conditional on the values of the explanatory variables). Although you can still deduce something about the plausible range of p-values for your simultaneous test of all three variables, that range is large. When all three are strongly correlated, one p-value tells you about as much as all three. When all three are independent, see https://stats.stackexchange.com/questions/20616, https://stats.stackexchange.com/questions/78596, and https://stats.stackexchange.com/questions/66300 for examples. – whuber Jun 28 '21 at 11:02
    • There have been a number of attempts to deal with the case where the individual tests cannot be assumed to be independent. You might be interested in the R package poolr https://cran.r-project.org/package=poolr even if you do not use R. The manual contains a number of references. – mdewey Jun 28 '21 at 14:42
    • @whuber: thanks for your comment. In fact, I do not know whether the variables are correlated. However, the measurement was done such that there are no correlations among the variables (I updated the question accordingly). Thanks for the 3 links! I checked them and they basically refer to Fisher's method and Stouffer's method. Now, according to https://stats.stackexchange.com/questions/171742/test-for-significant-excess-of-significant-p-values-across-multiple-comparisons/246059#246059, these two methods are designed for the alternative hypotheses that I described in my question. – Fabian Rost Jun 29 '21 at 09:14
    • However, I want to know whether all 3 variables change simultaneously (but possibly differently), which, if I understand correctly, is not captured by the alternative hypotheses specified in my question. – Fabian Rost Jun 29 '21 at 09:14
    • Thanks @mdewey! I'll check out the package. – Fabian Rost Jun 29 '21 at 09:15
    • I read the documentation for `poolr`. If I understand correctly, all functions there test the joint null hypothesis, i.e. that there is no difference for any of the variables. The joint hypothesis would be rejected if one or more of the variables show a difference. Now, what I want to find out is whether there is a difference in all of the variables, not just in one or more, so I think I cannot use this (see the sketch below). But anyway, thanks for pointing this out, @mdewey! – Fabian Rost Jun 30 '21 at 08:53
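To spell out the distinction for myself: below is a minimal base-R sketch of Fisher's combined test, one of the pooling approaches mentioned in the comments above. It tests the joint null that none of the variables is affected, so it can produce a small pooled p-value even when only a single variable shows an effect; that is why it does not answer my "all variables" question. The p-values are made up for illustration.

```r
# Made-up p-values: a clear effect for one variable, none for the other two
p <- c(0.001, 0.6, 0.7)

# Fisher's combined test of the joint null
#   H0: the treatment affects none of the variables.
# Under H0, -2 * sum(log(p_i)) follows a chi-squared distribution
# with 2 * k degrees of freedom (k = number of independent p-values).
X2 <- -2 * sum(log(p))
p_fisher <- pchisq(X2, df = 2 * length(p), lower.tail = FALSE)

p_fisher  # ~0.016: the joint null is rejected at the 5% level ...
max(p)    # ... yet max(p) = 0.7, so I cannot claim an effect on all variables
```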

0 Answers