
Let's say that I want to compute the effect size of a certain intervention, for which I have both the raw data and some statistical value that can be transformed to an effect size. For example, if one has the t score and the df value of a t-test, one can transform it into Cohen's d by using the equation

$$d=\frac{2t}{\sqrt{df}}$$

This can also be done for other statistical values, such as F statistics and chi-square.

Do I lose any kind of information or precision if I choose to make the conversion from the statistical value rather than from the raw data? That is, will these transformations result in exactly the same effect size as if I had made the computations with the raw data instead, or are some of them merely estimates?

Speldosa

1 Answer


Let's take the example of a two-group comparison conducted with the independent samples t-test. The test statistic is given by $$t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},$$ where $\bar{x}_1$ and $\bar{x}_2$ are the means of the two groups, $s_p$ is the pooled standard deviation, and $n_1$ and $n_2$ are the group sizes.

The standardized mean difference (Cohen's d) is given by $$d = \frac{\bar{x}_1 - \bar{x}_2}{s_p}.$$ Therefore, one can compute $d$ from $t$ with $$d = t \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}.$$ So, one will get exactly the same value whether one computes $d$ directly from the means and SDs or converts $t$ to $d$. Hence, no information or precision is lost when computing $d$ in this manner.
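
To check this numerically, here is a minimal sketch in Python (numpy/scipy, with made-up data for two groups -- any two samples would do): $d$ computed from the raw means and pooled SD coincides with $d$ recovered from the $t$ statistic, up to floating-point rounding.

    import numpy as np
    from scipy import stats

    # Two hypothetical groups (made-up data, purely for illustration)
    x1 = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])
    x2 = np.array([4.2, 4.6, 3.9, 4.8, 4.1, 4.4, 4.0])
    n1, n2 = len(x1), len(x2)

    # Pooled standard deviation
    sp = np.sqrt(((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2))

    # Cohen's d directly from the raw data
    d_raw = (x1.mean() - x2.mean()) / sp

    # Cohen's d recovered from the t statistic of the independent samples t-test
    t = stats.ttest_ind(x1, x2, equal_var=True).statistic
    d_from_t = t * np.sqrt(1 / n1 + 1 / n2)

    print(d_raw, d_from_t)  # identical up to floating-point rounding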

More generally, this will work, as long as you use the correct equation to convert the test statistic to the corresponding effect size measure.

UPDATE: I'll add two more examples to illustrate this point:

  1. Suppose you conduct a one-way ANOVA on a total of $n$ individuals (all groups combined) with $g$ groups. The usual $F$ statistic is then equal to $F = MS_B / MS_E$, where $MS_B$ and $MS_E$ are the between-group (or treatment) mean square and the error mean square, respectively, which in turn are equal to $MS_B = SS_B / (g-1)$ and $MS_E = SS_E / (n-g)$, where $SS_B$ and $SS_E$ are the between-group sum of squares and the error sum of squares.

    The usual effect size measure in this context is "eta-squared" ($\eta^2$), which can be computed with $\eta^2 = SS_B / (SS_B + SS_E)$ (i.e., the between-group sum of squares divided by the total sum of squares). But one can just as easily compute this with $$\eta^2 = \frac{F \times (g-1)}{F \times (g-1) + (n-g)}.$$ The results will be exactly the same (a numerical check of this conversion is sketched below, after this list).

  2. Suppose you have a $2 \times 2$ table to examine the relationship between two dichotomous variables X and Y of the form:

          |   X    not_X |
    ------+--------------+
    Y     |   a     b    |
    not_Y |   c     d    |
    ------+--------------+
    

    The usual test of independence yields a chi-square value ($\chi^2$) with one degree of freedom. One measure of the strength of the association between the two variables is the phi coefficient. It can be computed with $$\phi = \frac{ad - bc}{\sqrt{(a+b)(c+d)(a+c)(b+d)}}.$$ But one can also compute this with $\phi = \sqrt{\chi^2 / n}$, where $n = a+b+c+d$ (assuming that $\chi^2$ was not computed with a continuity correction -- see this question). One caveat: one does not know the correct sign of $\phi$ when it is computed that way (i.e., whether the coefficient was positive or negative). So, to be precise, we get $\sqrt{\chi^2 / n} = |\phi|$ (again, a numerical check of this conversion is sketched below).
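
To check the ANOVA example numerically, here is a minimal Python sketch (numpy/scipy, with made-up data for three groups -- any grouping would do): $\eta^2$ computed from the sums of squares coincides with $\eta^2$ recovered from the $F$ statistic.

    import numpy as np
    from scipy import stats

    # Three hypothetical groups (made-up data, purely for illustration)
    groups = [np.array([3.1, 2.9, 3.6, 3.3]),
              np.array([4.0, 4.4, 3.8, 4.1, 4.3]),
              np.array([2.5, 2.8, 2.2, 2.6])]
    g = len(groups)
    n = sum(len(x) for x in groups)
    grand_mean = np.concatenate(groups).mean()

    # Between-group and error sums of squares
    ss_b = sum(len(x) * (x.mean() - grand_mean) ** 2 for x in groups)
    ss_e = sum(((x - x.mean()) ** 2).sum() for x in groups)

    # Eta-squared directly from the sums of squares
    eta2_raw = ss_b / (ss_b + ss_e)

    # Eta-squared recovered from the F statistic of the one-way ANOVA
    f = stats.f_oneway(*groups).statistic
    eta2_from_f = f * (g - 1) / (f * (g - 1) + (n - g))

    print(eta2_raw, eta2_from_f)  # identical up to floating-point rounding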
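
Likewise for the $2 \times 2$ example, a minimal Python sketch with made-up cell counts: $|\phi|$ from the cell counts coincides with $\sqrt{\chi^2 / n}$ as long as the chi-square statistic is computed without the continuity correction.

    import numpy as np
    from scipy import stats

    # Hypothetical 2x2 table of counts (made-up numbers, purely for illustration)
    a, b, c, d = 30, 10, 15, 45
    table = np.array([[a, b], [c, d]])
    n = table.sum()

    # Phi directly from the cell counts
    phi_raw = (a * d - b * c) / np.sqrt((a + b) * (c + d) * (a + c) * (b + d))

    # Phi recovered from the chi-square statistic (no continuity correction)
    chi2, _, _, _ = stats.chi2_contingency(table, correction=False)
    phi_from_chi2 = np.sqrt(chi2 / n)

    print(phi_raw, phi_from_chi2)  # equal in absolute value; the sign of phi is lost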

Wolfgang
  • "More generally, this will work, as long as you use the correct equation to convert the test statistic to the corresponding effect size measure." - And I guess these equations are derivable for the all the usual suspects when it comes to hypothesis testing, like t-tests (you showed that it was), Z-tests and different types of ANOVA? – Speldosa Apr 02 '14 at 13:09
  • Yes, in principle. Of course, the devil is always in the details and what kind of effect size measure one wants to compute. So, I would not want to make any sweeping generalizations. But I'll add two more examples to my answer. – Wolfgang Apr 02 '14 at 15:31