
I would like to conduct a meta-analysis on a set of studies that measured an outcome both pre-intervention and post-intervention, for a control group and an experimental group. A method of doing this for quantitative data was demonstrated by Morris (2008) and can be implemented in the metafor package in R.
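For context, the Morris-style pre-post-control effect size can be assembled in metafor by computing the standardised mean change for each arm with `escalc(measure = "SMCR")` and taking the difference between arms. This is only a sketch with made-up summary statistics; in particular, the pre-post correlations `ri_t` and `ri_c` are assumed values that would have to be reported by the studies or imputed.

```r
library(metafor)

# Hypothetical summary data for a single study (illustration only)
dat <- data.frame(
  m_pre_t = 10, m_post_t = 14, sd_pre_t = 4, n_t = 30, ri_t = 0.5,  # treatment
  m_pre_c = 10, m_post_c = 11, sd_pre_c = 4, n_c = 30, ri_c = 0.5   # control
)

# Standardised mean change (raw-score standardisation) within each arm;
# ri is the assumed pre-post correlation
es_t <- escalc(measure = "SMCR", m1i = m_post_t, m2i = m_pre_t,
               sd1i = sd_pre_t, ni = n_t, ri = ri_t, data = dat)
es_c <- escalc(measure = "SMCR", m1i = m_post_c, m2i = m_pre_c,
               sd1i = sd_pre_c, ni = n_c, ri = ri_c, data = dat)

# Pre-post-control difference: subtract the control change from the
# treatment change; since the arms are independent, the variances add
yi <- es_t$yi - es_c$yi
vi <- es_t$vi + es_c$vi
```

The resulting `yi` and `vi` from each study could then be passed to `rma()` in the usual way.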

The only problem with Morris's proposed effect size is that it is built around the standardised mean change, which is likely not appropriate for count data. I understand that for count data I can use the incidence rate ratio (IRR) or the rate difference to compare two groups. But what if, like Morris, I want to compare the pre-post difference in my treatment group with the pre-post difference in my control group?

One option would be to calculate the rate difference between pre and post for each group and then subtract one from the other. However, I am not sure whether the likely correlation between the pre and post measurements would make the resulting effect size inaccurate. Another option would be to use the control group's rate difference as a moderator, but this seems to have the same problem.
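One way to see what the correlation does to this proposal: on the log scale, the difference-in-differences of incidence rates is a ratio of rate ratios, and under an independent-Poisson assumption its variance is just a sum of reciprocal counts. A positive pre-post correlation would subtract covariance terms from that variance, so it affects the precision rather than the point estimate. A minimal sketch with hypothetical counts:

```r
# Hypothetical event counts x and person-time t for one study
x_pre_t  <- 40; t_pre_t  <- 100   # treatment arm, pre
x_post_t <- 25; t_post_t <- 100   # treatment arm, post
x_pre_c  <- 38; t_pre_c  <- 100   # control arm, pre
x_post_c <- 36; t_post_c <- 100   # control arm, post

# Log rate ratio (post vs pre) within each arm
lrr_t <- log(x_post_t / t_post_t) - log(x_pre_t / t_pre_t)
lrr_c <- log(x_post_c / t_post_c) - log(x_pre_c / t_pre_c)

# Difference-in-differences on the log scale (a ratio of rate ratios)
yi <- lrr_t - lrr_c

# Variance if all four counts were independent Poisson: sum of 1/x terms
vi_indep <- 1 / x_post_t + 1 / x_pre_t + 1 / x_post_c + 1 / x_pre_c

# If the same subjects contribute pre and post, the within-arm counts are
# positively correlated and the true variance is vi_indep minus 2*cov terms,
# which the summary counts alone do not identify; vi_indep is then
# conservative (confidence intervals too wide), but yi itself is unchanged.
```

Under that reading, ignoring the correlation would not bias the pooled estimate, only make the study weights and intervals conservative; whether that is acceptable depends on how large the within-arm covariances plausibly are.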

Does anyone have any suggestions on how to deal with this issue?

John
  • "whether the likely correlation between the pre and post results would make my effect size " How do you estimate the correlation between pre and post? Is yours count data? –  Jun 14 '19 at 16:24
  • It is. Why can't I estimate correlations with count data? This is done pretty regularly...see https://stats.stackexchange.com/questions/276360/testing-for-correlation-with-count-data – John Jun 14 '19 at 16:28
  • That does not answer my question. And what prompts you to think of correlation between pre and post? –  Jun 14 '19 at 16:37
  • Accounting for dependence between pre and post measures on the same group. You can estimate the correlation with Pearson's r... – John Jun 14 '19 at 16:39
  • The difference in proportion between pre and post treatment (exp. design) can be used to estimate the effect size of a study. Why should we compute it and compare it with the difference in the control group? To me, doing so does not by itself result in a true estimate of effect size and so does not serve as a good basis for meta-analysis. You need not land in the alley of the difference being measured on a continuous scale. –  Jun 15 '19 at 09:21
  • Using the difference between the two groups, exp. versus control, as a basis for meta-analysis is also possible. You may understand that the difference (exp.) minus the difference (control) results in the elimination of random effects (generated by random factors) such as the age, culture, past etc. of subjects, and hence no need for meta-analysis. –  Jun 15 '19 at 09:39
  • To answer your first comment, presumably we are interested in differences in the outcome between control and experimental groups while accounting for differences that existed at baseline. In the quasi-experimental studies common in my field, pre-intervention differences between groups often exist. – John Jun 16 '19 at 14:16
  • To answer your second comment, I am not sure how this would result in the elimination of random effects. Aren't the random effects accounting for variance at the study level? If I have multiple studies with control and experimental groups, isn't meta-analysis the best way to summarise the pooled effect across studies? – John Jun 16 '19 at 14:18
  • In the case of groups matched for control and exp. purposes, it is unlikely you would face the situation you are speculating about. The situation could seemingly arise when we are considering unmatched groups as the control and exp. groups. Here, the exp. group (a model village) and the control group (say, a second village) seem to have been subjected to a treatment, and you want to test a new or second dose of treatment on the exp. group. It is something like this. You may like to give some more information. –  Jun 17 '19 at 01:36

0 Answers