Assume that two drugs were tested. The risk of death for drug 1 is $p_1$ and the risk for drug 2 is $p_2$. We define:
- Risk difference (RD): $RD = p_1 - p_2$
- Number needed to treat (NNT): $NNT = 1/|RD|$
If we know the estimate $RD^*$ and its standard error $se(RD^*)$, what is the 95% CI for the NNT? I can think of two methods for solving this problem. Which one is correct, and why?
**Method 1:** We first construct the 95% CI for RD, and then obtain the 95% CI for NNT by inverting the endpoints of the CI for RD:
Step 1: 95% CI for RD: $RD^* \pm 1.96 \, se(RD^*)$
Step 2: 95% CI for NNT: $1 / \left( RD^* \pm 1.96 \, se(RD^*) \right)$
Result: $(7.5, 149)$
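For concreteness, here is a minimal Python sketch of this method. The inputs `rd = 0.07` and `se_rd = 0.0323` are hypothetical values I chose to roughly reproduce the interval above, not my actual data:

```python
# Method 1: build the 95% CI for RD, then invert the endpoints.
rd, se_rd = 0.07, 0.0323  # hypothetical estimate and standard error

rd_lo = rd - 1.96 * se_rd  # lower 95% limit for RD
rd_hi = rd + 1.96 * se_rd  # upper 95% limit for RD

# Inverting flips the order: a larger RD gives a smaller NNT.
nnt_ci = (1 / rd_hi, 1 / rd_lo)
print(nnt_ci)  # ~ (7.50, 149.4)
```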
**Method 2:** We first derive $se(NNT^*)$ from $se(RD^*)$ by the delta method, and then calculate the 95% CI for NNT:

Step 1: $se(NNT^*) = se(RD^*) \left| \frac{d\,NNT}{d\,RD} \right|_{RD = RD^*} = \frac{se(RD^*)}{(RD^*)^2}$
Step 2: 95% CI for NNT: $NNT^* \pm 1.96 \, se(NNT^*)$
Result: $(1.38, 27.07)$
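And here is the matching sketch for this method, using the same hypothetical inputs as above:

```python
# Method 2: delta method for se(NNT*), then a symmetric Wald interval.
rd, se_rd = 0.07, 0.0323  # same hypothetical inputs as in Method 1

nnt = 1 / abs(rd)          # point estimate of NNT
se_nnt = se_rd / rd**2     # delta method: |d(1/RD)/dRD| = 1/RD^2

nnt_ci = (nnt - 1.96 * se_nnt, nnt + 1.96 * se_nnt)
print(nnt_ci)  # ~ (1.37, 27.21)
```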
Obviously, the two results are quite different. What is the problem with these two methods?