11

If you choose to analyse a pre-post treatment-control design with a continuous dependent variable using a mixed ANOVA, there are various ways of quantifying the effect of being in the treatment group. The interaction effect is one main option.
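(To be concrete, by a mixed ANOVA I mean something along these lines in R; the data frame and column names below are placeholders of my own, just for illustration.)

```r
# Illustrative sketch only: dat is a long-format data frame with one row per
# subject x time point, with made-up columns score, group (treatment vs
# control), time (pre vs post), and id (subject identifier, coded as a factor).
fit <- aov(score ~ group * time + Error(id / time), data = dat)
summary(fit)  # the group:time interaction term carries the treatment effect
```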

In general, I particularly like Cohen's $d$-type measures (i.e., $\frac{\mu_1 - \mu_2}{\sigma}$). I don't like variance-explained measures because the results vary based on irrelevant factors such as the relative sample sizes of the groups.

Thus, I was thinking I could quantify the effect as follows:

  • $\Delta\mu_c = \mu_{c2} - \mu_{c1}$
  • $\Delta\mu_t = \mu_{t2} - \mu_{t1}$
  • Thus, the effect size could be defined as $\frac{\Delta\mu_t - \Delta\mu_c}{\sigma}$

where $c$ refers to the control group, $t$ to the treatment group, and 1 and 2 to pre- and post-test respectively. $\sigma$ could be the pooled standard deviation at time 1.
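In code, this is roughly what I have in mind (a rough R sketch; the helper name `d_pp` and the input vectors are placeholders of my own):

```r
# Rough sketch of the proposed effect size (illustrative names only).
# pre_t, post_t: pre- and post-test scores for the treatment group
# pre_c, post_c: pre- and post-test scores for the control group
d_pp <- function(pre_t, post_t, pre_c, post_c) {
  delta_t <- mean(post_t) - mean(pre_t)  # mean change in the treatment group
  delta_c <- mean(post_c) - mean(pre_c)  # mean change in the control group
  n_t <- length(pre_t)
  n_c <- length(pre_c)
  # pooled standard deviation of the pre-test (time 1) scores
  sd_pool <- sqrt(((n_t - 1) * var(pre_t) + (n_c - 1) * var(pre_c)) /
                    (n_t + n_c - 2))
  (delta_t - delta_c) / sd_pool
}
```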

Questions:

  • Is it appropriate to label this effect size measure $d$?
  • Does this approach seem reasonable?
  • What is standard practice for effect size measures for such designs?
Jeromy Anglim
  • And I think one could make this clear by noting "(between)", so people would know it is a treatment-versus-control effect size, because there is a within-group effect size too. FYI. Good luck! –  Apr 19 '11 at 02:15

2 Answers

7

Yes, what you are suggesting is exactly what has been suggested in the literature. See, for example: Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs. Organizational Research Methods, 11(2), 364-386 (link, but unfortunately, no free access). The article also describes different methods for estimating this effect size measure. You can use the letter "d" to denote the effect size, but you should definitely provide an explanation of what you calculated (otherwise, readers will probably assume that you calculated the standardized mean difference only for the post-test scores).
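For example, a common refinement is a small-sample (Hedges-type) bias correction; here is a rough sketch (the helper name is made up, and this is not meant to reproduce the article's estimators exactly):

```r
# Illustrative only: apply an approximate Hedges-type small-sample correction
# to a d standardized by the pooled pre-test SD of the two groups.
correct_d <- function(d, n_t, n_c) {
  df <- n_t + n_c - 2         # degrees of freedom of the pooled pre-test SD
  d * (1 - 3 / (4 * df - 1))  # approximate bias-correction factor
}
```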

Wolfgang
  • Thanks. The Scott B. Morris article is just what I was looking for. And yes, I agree that I should provide an explanation of the calculation (perhaps I'll call it something like $\hat{d}$). – Jeromy Anglim Nov 26 '10 at 03:01
  • @Wolfgang Do you maybe know how to calculate a CI for this estimate? I asked a question about it [here](https://stats.stackexchange.com/questions/498971/confidence-intervalls-of-cohens-d-in-pre-post-design-with-treatment-and-control) –  Dec 02 '20 at 10:15
3

I believe that generalized eta-squared (Olejnik & Algina, 2003; Bakeman, 2005) provides a reasonable solution to the quantification of effect size that generalizes across between-subjects and within-subjects designs. If I read those references correctly, generalized eta-squared should also generalize across sample sizes.

Generalized eta-squared is computed automatically by the ezANOVA() function in the ez package for R.
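For example (a minimal sketch; the data frame `dat` and the column names score, id, time, and group are just placeholders), a call along these lines should return an ANOVA table whose `ges` column holds generalized eta-squared for each effect, including the group-by-time interaction:

```r
library(ez)

# dat: long-format data frame, one row per subject x time point (placeholder
# column names: score, id, time, group; id should be a factor).
res <- ezANOVA(
  data    = dat,
  dv      = score,   # continuous dependent variable
  wid     = id,      # subject identifier
  within  = time,    # pre/post factor
  between = group,   # treatment/control factor
  type    = 3
)
res$ANOVA  # includes a 'ges' column with generalized eta-squared
```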

Mike Lawrence
  • Thanks for the references and R function. I still prefer the interpretation of d-based measures (where they apply) over variance-explained measures. I find it clearer to think of the effect of an intervention in terms of a difference score. – Jeromy Anglim Nov 25 '10 at 04:40