
I am studying the effect of X on different diseases. X shows a significantly increased risk for, say, 2 diseases (good to report), but for the rest of the diseases it shows an increased but non-significant risk.

For some diseases it shows increased risk with a p-value of 0.1, and for others with a p-value of 0.7. Should I write in the manuscript that increased risks were found but with non-significant trends? I read somewhere that results up to a p-value of 0.1 can be reported, but not above that. Is this true?

I will, of course, report the non-significant results in a table, but should I also write about and discuss the non-significant results in the discussion section? Please elaborate on your suggestions, and share any manuscript that has reported non-significant results.

chl
  • I think it makes sense to report the *p* values and effect sizes (amount of increase) for the nonsignificant factors (say, in a table). Often it's difficult to say much about the non-significant factors due to the interpretation of the *p* value: you don't know that this factor *doesn't* have a meaningful effect, you just don't have enough evidence to say that it has a significant effect. However, if the effect size of a factor is large, it warrants future consideration. – Sal Mangiafico Nov 24 '19 at 19:29
  • You might consider reporting 95% confidence intervals for the effect of X on various diseases. Roughly speaking, CIs for nonsignificant effects will include 0. // The information provided by P-values is often misunderstood, especially by nonstatistical audiences, so you should not rely on P-values alone to explain the importance of your findings. – BruceET Nov 24 '19 at 21:02
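
The confidence-interval suggestion above can be made concrete. Below is a minimal sketch (Python standard library only) of a Wald 95% CI for an odds ratio from a 2×2 table; all counts are invented for illustration. On the log scale a non-significant CI includes 0, which corresponds to the odds-ratio CI including 1:

```python
import math
from statistics import NormalDist

# Hypothetical 2x2 table for one disease (counts are made up):
# rows = exposed/unexposed to X, columns = cases/non-cases.
a, b = 30, 70   # exposed:   cases, non-cases
c, d = 20, 80   # unexposed: cases, non-cases

or_hat = (a * d) / (b * c)                  # odds ratio point estimate
se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Wald SE of log(OR)
z = NormalDist().inv_cdf(0.975)             # ~1.96 for a 95% CI

lo = math.exp(math.log(or_hat) - z * se_log)
hi = math.exp(math.log(or_hat) + z * se_log)

print(f"OR = {or_hat:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# OR = 1.71, 95% CI [0.89, 3.29]
```

The interval contains 1 ("no effect"), so the result is non-significant, yet the point estimate and the width of the interval are still informative and worth reporting alongside the p-value.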

1 Answer


You should present all your results of course.

Whether or not results with a high p-value deserve discussion depends on the setup of the experiment/research.

  • Sometimes a study is designed (through the choice of sample size) before any observations are made, to ensure that the p-values will be low enough whenever an observed effect size would be worth discussing. In that case, if the p-value is too high, then by design of the experiment the effect size is not interesting enough to be discussed.

  • Sometimes a study is underpowered and the sample size is low. This can be the case, for instance, in a small preliminary experiment (or when the researcher simply did not think beforehand about what sample size would be necessary to draw decent conclusions from a reasonably large observed effect size).

    It might be that one was hoping to quickly find a large (and significant) effect size. But instead one might find a medium effect size that could still be of interest.

    In such a case it is certainly fine to discuss the result. The p-value should not be the reason to discuss an effect; instead, the observed effect itself is the reason for discussion. The high p-value just means that the experiment was not precise/accurate enough (and that should be discussed as well), so there is uncertainty about the observed effect.

    Even better would be to do follow-up research and publish that. Instead of questioning whether the effect should be discussed, it might be questionable whether the experiment should be reported/published at all. A study should be properly performed and accurate enough to measure effects with some reasonable certainty/accuracy. Only in an ongoing investigation, or with a publication style that discusses preliminary results (with more to follow), does it make sense to report inaccurate results.

    This last paragraph is a bit problematic, because you get selection bias: mostly results that are significant get published, while results that are not significant are held back or wait for validation before publication. You also get the effect that there can be multiple attempts to research an effect, yet we only see the one attempt that succeeded. (This is why in some fields the boundaries for p-values are set lower, like 5 sigma in particle physics, although at such low probabilities you run into other problems as well.)
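
The first point above, choosing the sample size in advance so that an effect worth discussing would come out significant, is a standard power calculation. A minimal sketch using only the Python standard library, for a two-sided two-sample comparison with a standardized effect size d (all numbers illustrative):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided two-sample
    z-test to detect a standardized effect size d (Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# To detect a "medium" effect (d = 0.5) with 80% power at alpha = 0.05:
print(n_per_group(0.5))   # 63 per group
```

If the study enrolled far fewer subjects than such a calculation suggests, a high p-value mainly reflects the low power of the design rather than the absence of an effect, which is exactly the underpowered situation described in the second bullet.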

Sextus Empiricus