
Using Cohen's $d$, I am getting small and medium effect sizes for results that are statistically non-significant ($p>.05$). Does this make sense?

Nick Stauner
Melissa Duncombe

3 Answers


Yes, this can make complete sense. In fact, it is also possible (though rarer) to see a large estimated effect size without statistically significant evidence that the true effect isn't zero.

The issue is that your effect size is just a point estimate, and hence a random variable that depends on the particular sample you have available for analysis. If you construct a 95% confidence interval for the estimate, you will see that it includes zero, which is why your p-values are above 0.05.
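
To make this concrete, here is a minimal simulation sketch in Python (the group size and the true mean difference are assumed purely for illustration, not taken from the question): with small groups, the point estimate of Cohen's $d$ can easily land in the small-to-medium range while the t-test stays non-significant and a rough 95% confidence interval for $d$ includes zero.

```python
# Minimal simulation sketch: small groups, modest true difference (values
# assumed for illustration only). The point estimate of Cohen's d will often
# look "small to medium" while the t-test stays non-significant and a rough
# 95% CI for d includes zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 15                                        # per-group size (assumed)
a = rng.normal(0.0, 1.0, n)
b = rng.normal(0.5, 1.0, n)                   # true standardized difference 0.5

t, p = stats.ttest_ind(a, b)

# Pooled-SD Cohen's d (a point estimate, nothing more)
sp = np.sqrt(((n - 1) * a.var(ddof=1) + (n - 1) * b.var(ddof=1)) / (2 * n - 2))
d = (b.mean() - a.mean()) / sp

# Large-sample approximation to the standard error of d
se_d = np.sqrt(2 / n + d**2 / (4 * n))
lo, hi = d - 1.96 * se_d, d + 1.96 * se_d

print(f"d = {d:.2f}, p = {p:.3f}, approx. 95% CI for d: ({lo:.2f}, {hi:.2f})")
```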

Peter Ellis

Yes. This basically means that you see a medium (say) effect, but you cannot be sure, at the 95% level, that what you see is not due to random fluctuation. This most likely happens because your sample is too small. You will want to have a look at http://en.wikipedia.org/wiki/Statistical_power

(And to all the professionals here: yes, I know this is all very imprecise, vague, and wrong. I'm just trying to match the level of the answer to the level of the question.)
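
For a sense of scale, here is a quick power simulation in Python (the numbers are assumed for illustration): with a true medium effect of $d = 0.5$ and 20 observations per group, a two-sample t-test at the .05 level detects the effect only roughly a third of the time.

```python
# Quick power simulation (assumed numbers): true medium effect d = 0.5,
# n = 20 per group, alpha = .05. The estimated power is roughly one third,
# so a real medium effect would usually be declared "non-significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, d_true, alpha, reps = 20, 0.5, 0.05, 10_000

rejections = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(d_true, 1, n)).pvalue < alpha
    for _ in range(reps)
)
print(f"Estimated power: {rejections / reps:.2f}")   # around 0.33
```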

AVB

What a p-value is: a p-value answers the following question: if, in the population from which this sample is drawn, there were really no effect at all, how likely is a result as extreme as, or more extreme than, the one we got in this sample?

That is all it means.

This question is almost never of interest.

Effect sizes (like Cohen's $d$) are much more important in nearly all cases. They answer the question: How big is the effect? That is what we are usually interested in.
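
The converse is also worth seeing. As a rough Python sketch (the numbers are assumed for illustration): with a very large sample, even a trivially small effect can produce a "significant" p-value, which is exactly why the two questions should not be conflated.

```python
# Sketch of the converse case (assumed numbers): a huge sample turns a
# negligible true effect (d = 0.05) into a "significant" result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 20_000                                    # per group
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.05, 1.0, n)                  # true standardized difference 0.05

t, p = stats.ttest_ind(a, b)
sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / sp

print(f"d = {d:.3f}, p = {p:.4g}")            # tiny effect, p very likely < .05
```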

I don't think we should make our answers so simple that they are wrong; I think we can help less-schooled questioners understand what is really going on. And, in this case, I think that can be done fairly easily.

Peter Flom
  • Except your own answer misses the most important issue, which is that the "effect size" is only an *estimate* of the true effect size. The estimate is so uncertain that the true value might easily be as small as zero. Estimated effect sizes are very dangerous if not reported with some indicator of the associated uncertainty. I agree a p-value is a very poor indicator and a confidence interval is much better, but I would hate to see someone naively report the estimated effect size on the basis of your answer that it is more important than the p-value. – Peter Ellis Mar 07 '12 at 06:31
  • Fair enough, @PeterEllis. – Peter Flom Mar 07 '12 at 11:03
  • I was probably too vehement in that comment too :( – Peter Ellis Mar 07 '12 at 12:31