
I have the results of a randomized trial in which 200 participants were randomly allocated to 2 intervention groups and the outcome was evaluated 1 month later.

I believe baseline differences between the 2 groups are nowadays not supposed to be subjected to hypothesis tests. However, I find that one of the baseline variables is clearly different between the 2 groups, and this difference gives a P value < 0.001 on hypothesis testing. Moreover, this baseline parameter may influence the outcome parameter, so the observed difference in outcome may not be due solely to the different interventions tested.

How do I handle this? Should I just ignore the baseline difference? Should I use this baseline parameter as a covariate in the outcome analysis? Thanks for your insight.

rnso
  • You should first look at the randomisation process, because the result you report would be very very unusual if the participants were really randomly allocated to the groups. A good way to examine the randomisation process is to write it down step by step and check whether the procedure was followed in all cases. – Michael Lew Sep 05 '20 at 21:32
  • Ok, thanks. So it may show that the randomization process was faulty or that proper randomization did not succeed. But how do we now analyze these data to reach reliable conclusions? – rnso Sep 06 '20 at 17:48

1 Answer


As @MichaelLew notes, there could have been a failure of randomization. If the interventions have not yet been applied, you could try re-randomizing. Assuming the randomization itself wasn't faulty, bear in mind that this does (and should) happen all the time, especially if a large number of baseline variables are checked (it is common to check dozens, and we would expect roughly 1 in 20 to be 'significant' at the 0.05 level by chance alone). There are two different things to bear in mind:

  1. Since the groups were formed by randomizing, it is nonsensical to test for differences. Hypothesis tests try to ascertain whether the groups came from different populations (viz., populations with different means). But we know a priori that they came from the same population (those who consented into the study), because we assigned them to the groups from the same initial pool.

    That doesn't mean it's bad to check for covariate balance, only to test for it. As I said, covariates are uncorrelated with your intervention in the population, but not necessarily in your sample. So don't look at p-values; look at measures of effect size (a minimal sketch of such a check appears after this list). In a small study you could, by chance alone, have a large and important covariate imbalance without it being significant. Likewise, in a large study the random deviations from pure balance will produce some 'significant' differences (which are type I errors by construction, since the groups were drawn from the same pool) that are trivial and ignorable.

    After computing effect sizes, I try to consult with the PIs on the study to see whether any of the imbalances would be considered meaningful. These could be variables with a known causal relationship with the response, for example, in which case they may care about small differences.

  2. Let's imagine, then, that you have a covariate imbalance that is large and/or meaningful. (This seems to be the crux of your question.) In that case, you just control for those variables in the final model; simply including them as covariates is sufficient (see the second sketch below).
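
To make point 1 concrete, here is a minimal sketch in Python of checking balance via standardized mean differences rather than p-values. The data are simulated, and the column names (`group`, `age`, `baseline_score`) are hypothetical stand-ins for your trial's variables:

```python
import numpy as np
import pandas as pd

# Simulated stand-in for the trial data; all column names are hypothetical.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "group": np.repeat(["A", "B"], n // 2),
    "age": rng.normal(50, 10, n),
    "baseline_score": rng.normal(25, 5, n),
})

def smd(x_a, x_b):
    """Standardized mean difference, using the pooled standard deviation."""
    pooled_sd = np.sqrt((x_a.var(ddof=1) + x_b.var(ddof=1)) / 2)
    return (x_a.mean() - x_b.mean()) / pooled_sd

grp_a = df[df["group"] == "A"]
grp_b = df[df["group"] == "B"]
for var in ["age", "baseline_score"]:
    # Judge the magnitude (e.g. against a rule of thumb such as |SMD| > 0.1,
    # or subject-matter knowledge), not the p-value.
    print(f"{var}: SMD = {smd(grp_a[var], grp_b[var]):.2f}")
```

For binary or skewed covariates you would use the corresponding effect-size measure (e.g. a difference in proportions); the point is to judge the magnitude of the imbalance, not its significance.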

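And for point 2, a minimal sketch of the adjustment itself: include the imbalanced baseline variable as a covariate in the outcome model (ANCOVA-style). Again the data are simulated with hypothetical names, and statsmodels' formula interface is just one convenient way to fit it:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data with a deliberate baseline imbalance that also affects
# the outcome, so the unadjusted group comparison is partly confounded.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"group": np.repeat(["A", "B"], n // 2)})
df["baseline_score"] = rng.normal(25, 5, n) + 3 * (df["group"] == "B")
df["outcome"] = (0.8 * df["baseline_score"]
                 + 2.0 * (df["group"] == "B")  # 'true' intervention effect
                 + rng.normal(0, 3, n))

# Unadjusted: the baseline imbalance leaks into the group estimate.
unadjusted = smf.ols("outcome ~ group", data=df).fit()
# Adjusted (ANCOVA-style): the imbalanced baseline variable is a covariate.
adjusted = smf.ols("outcome ~ group + baseline_score", data=df).fit()

print("unadjusted group effect:", round(unadjusted.params["group[T.B]"], 2))
print("adjusted group effect:  ", round(adjusted.params["group[T.B]"], 2))
```

The same idea carries over to other outcome models (logistic, survival, etc.): the imbalanced baseline variable simply enters as an additional covariate.
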
gung - Reinstate Monica
  • Thanks for a detailed reply. There could be several reasons for failure of randomization. Hence, should "checking that randomization was proper" not be an integral step in the analysis of data from a trial? What could be the steps for such a checking procedure? – rnso Sep 06 '20 at 18:21
  • You recommend looking not at p-values but at effect sizes. But are p-values not a reflection of effect size in relation to the number of subjects? – rnso Sep 06 '20 at 18:42
  • @rnso, sure, you can check to see if there was a failure of randomization. Your comment (now deleted, but current when I answered) suggested we assume the randomization was fine & address the existence of covariate imbalance. Moreover, 'how to determine if there was a failure of randomization?' should be asked as a new question. – gung - Reinstate Monica Sep 06 '20 at 18:49
  • Yes, *within the same sample & N*, & *within the same test*, the different p-values will correspond to different effect sizes, such that you can conclude the covariates with lower p-values have larger ESs. However: 1) that is not true between studies, so you wouldn't want to use the same alpha from one study to the next; 2) the ES is what you care about, & it isn't obvious from the p-value; 3) likewise, you don't care about ordering the covariates against each other; 4) the N's aren't necessarily the same, as there can be missing data, or some measurements only taken on a subset; (cont) – gung - Reinstate Monica Sep 06 '20 at 18:54
  • 5) what is a large effect for a difference in proportions isn't necessarily the same as what is a large ES for a difference in means, etc. In the end, you care if covariate imbalances are *large & meaningful*, not if they are 'significant', especially since any significance is by definition a type I error. – gung - Reinstate Monica Sep 06 '20 at 18:57
  • I would note that "checking if randomization was proper" is not achieved by inspection of the baseline values, but by inspection of the actual procedure for randomization. The effectiveness of the randomization in balancing covariates is a different issue. A haphazard approach to allocation is not the same as randomization. – Michael Lew Sep 06 '20 at 21:03
  • @gung-ReinstateMonica Your suggestion that one should care "if covariate imbalances are large & meaningful, not if they are 'significant'" is very appealing. However, major journals have done away with significance testing but have NOT started anything to check "if covariate imbalances are large & meaningful". Would you agree that this should be corrected? – rnso Sep 07 '20 at 02:45
  • Please see my follow-up question here: https://stats.stackexchange.com/questions/486404/how-to-check-if-randomization-was-proper – rnso Sep 07 '20 at 11:32
  • Yes, some journals have started trying to deal with some of these issues & some haven't. I do think that all journals should do so. – gung - Reinstate Monica Sep 07 '20 at 11:56