1

I am in the field of clinical psychology, where statistical significance is commonly judged against p = .05. For what values of p can I say my data APPROACHED significance?

Doug
  • I already answered this on Quora :-). – Peter Flom Jun 15 '14 at 12:30
  • Since you are not using the correct statistical terminology I suggest you study a textbook on statistical inference before proceeding. – Frank Harrell Jun 15 '14 at 12:38
  • See [here](http://stats.stackexchange.com/questions/21419/what-sense-does-it-make-to-compare-p-values-to-each-other), [here](http://stats.stackexchange.com/questions/70385/can-i-compare-the-p-values-of-two-wilcoxon-tests), [here](http://stats.stackexchange.com/questions/17173/can-you-compare-p-values-of-kolmogorov-smirnov-tests-of-normality-of-two-variabl) and [here](http://stats.stackexchange.com/questions/35433/comparing-p-value-from-t-test-vs-mann-whitney-test). Also [this paper](http://www.stat.columbia.edu/~gelman/research/published/signif4.pdf); all have some relevance to the issue. – Glen_b Jun 15 '14 at 12:39
  • I'd like to answer: Never. But, alas, that's too short for a proper answer. – Momo Jun 15 '14 at 17:47
  • Here's a succinct answer accompanied by an illuminating comparative study of how not to do it: http://mchankins.wordpress.com/2013/04/21/still-not-significant-2/ – conjugateprior Jun 15 '14 at 17:54

2 Answers

5

As already stated in the comments: never. If you stick to the Neyman–Pearson framework, then before doing your research you decide on a $p$-value threshold (like 0.05, or 0.001), and if you obtain a $p$-value smaller than or equal to that threshold, you decide to reject the null hypothesis. It does not make sense to say that a result is "almost significant", just as you would not say that you "almost passed an exam" or "almost survived a shot to the head". On the other hand, if you stick to the Fisherian framework, then you use $p$-values to measure the evidence against the null hypothesis, so you are not concerned with arbitrary thresholds at all. In either case, the statement does not make sense.
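
To make the Neyman–Pearson point concrete, here is a minimal sketch in Python (using NumPy and SciPy, with made-up data; none of the numbers come from this thread): the threshold $\alpha$ is fixed before looking at the data, and the resulting decision is strictly binary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05  # threshold chosen BEFORE seeing any data

# Hypothetical, made-up samples for two groups
treatment = rng.normal(loc=0.4, scale=1.0, size=30)
control = rng.normal(loc=0.0, scale=1.0, size=30)

t_stat, p_value = stats.ttest_ind(treatment, control)

# Under Neyman-Pearson the decision is all-or-nothing: there is no "almost".
decision = "reject H0" if p_value <= alpha else "fail to reject H0"
print(f"p = {p_value:.3f} -> {decision}")
```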

A similar point is made in the blog post recommended by conjugateprior in the comments:

> You don’t need to play the significance testing game – there are better methods, like quoting the effect size with a confidence interval – but if you do, the rules are simple: the result is either significant or it isn’t.

(Check also the post's review of how people stretch the language to describe non-significant $p$-values; the results are quite entertaining.)
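
The quote above suggests reporting the effect size with a confidence interval instead of a bare significant/not-significant verdict. Below is a rough sketch of what that could look like, again in Python with NumPy/SciPy and invented data (the difference in group means with a Welch-style 95% CI); it is only an illustration, not part of the quoted post.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(loc=0.4, scale=1.0, size=30)  # invented data
control = rng.normal(loc=0.0, scale=1.0, size=30)

# Effect size: simple difference in group means
diff = treatment.mean() - control.mean()

# Standard error of the difference (unequal variances allowed)
v1 = treatment.var(ddof=1) / len(treatment)
v2 = control.var(ddof=1) / len(control)
se = np.sqrt(v1 + v2)

# Welch-Satterthwaite degrees of freedom
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(treatment) - 1) + v2 ** 2 / (len(control) - 1))

# 95% confidence interval for the mean difference
lo, hi = stats.t.interval(0.95, df, loc=diff, scale=se)
print(f"mean difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```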

Tim
-3

I believe the convention is that p = .06 is considered "approaching" significance. Keep in mind, though, that it is not really possible to say that a result is truly approaching significance. Rather, such a value should give you reason to take a closer look at your data. The usual recommendation is to collect more data to see whether the increase in power gives you a statistically significant result or not.
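
As a rough illustration of the power point only (not of the "approaching significance" claim), here is a small simulation sketch in Python with NumPy/SciPy; the effect size of 0.4 standard deviations and the sample sizes are hypothetical choices, not from this thread. It shows that, for a fixed true effect, larger samples reject the null more often.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect = 0.4   # assumed (hypothetical) standardized effect size
alpha = 0.05
n_sims = 2000

for n in (30, 60, 120):            # per-group sample sizes to compare
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(true_effect, 1.0, size=n)
        b = rng.normal(0.0, 1.0, size=n)
        if stats.ttest_ind(a, b).pvalue <= alpha:
            rejections += 1
    print(f"n per group = {n:3d}, estimated power ~ {rejections / n_sims:.2f}")
```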

  • Sorry but -1, this answer is opinion-based and wrong. Why not 0.055 or 0.0634? If you stick to the [Neyman-Pearson](https://stats.stackexchange.com/questions/23142/when-to-use-fisher-and-neyman-pearson-framework) framework, then it is either significant, or not. – Tim Aug 09 '17 at 13:10
  • I agree with @Tim. The 0.05 cut-off is completely arbitrary, and this system of using a single p-value cut-off for all hypothesis tests is flawed. Nothing magical suddenly happens when you reach 0.05; the p-value is a continuous measure of the strength of evidence. The threshold should not be the same for all scenarios; that would be saying that false positives and false negatives have consequences of exactly the same magnitude in every scenario, which is clearly not the case. – dwhdai Aug 09 '17 at 13:49