I would like to test the difference of two odds ratios given the following R output (a logistic regression pooled over multiple imputations with mice):
f1=with(data=imp, glm(Y~X1+X2, family=binomial(link="logit")))
s01=summary(pool(f1))
s01
                   est        se         t       df   Pr(>|t|)
(Intercept) -1.7805826 0.1857663 -9.585070 391.0135 0.00000000
X1           0.2662796 0.1308970  2.034268 390.4602 0.04259997
X2           0.6757952 0.3869652  1.746398 395.6098 0.08151794
cbind(exp(s01[, c("est", "lo 95", "hi 95")]), pval=s01[, "Pr(>|t|)"])
                  est     lo 95     hi 95       pval
(Intercept) 0.1685399 0.1169734 0.2428389 0.00000000
X1          1.3051000 1.0089684 1.6881459 0.04259997
X2          1.9655955 0.9185398 4.2062035 0.08151794
To do so, I would need to take the difference of the two log odds ratios and obtain the standard error of that difference (as outlined here: Statistical test for difference between two odds ratios?).
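If I follow that approach correctly, the test statistic is built from the two log odds ratios and their standard errors,
$$z = \frac{\log\mathrm{OR}_1 - \log\mathrm{OR}_2}{\sqrt{SE(\log\mathrm{OR}_1)^2 + SE(\log\mathrm{OR}_2)^2}},$$
where each $SE(\log\mathrm{OR})$ is computed from the cell counts $a, b, c, d$ of a $2\times2$ table as $\sqrt{1/a + 1/b + 1/c + 1/d}$.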
One of my predictor variables is continuous, however, so there is no $2\times2$ table from which to compute the cell counts required for $SE(\log\mathrm{OR})$.
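As an alternative, I wonder whether the test could be done directly as a contrast on the pooled coefficients. Here is a minimal sketch of what I have in mind, assuming mice 2.x, where (if I read the documentation correctly) the mipo object returned by pool() stores the pooled coefficients in qbar and the pooled total variance-covariance matrix in t:

pooled = pool(f1)
cvec = c(0, 1, -1)                                # contrast for beta_X1 - beta_X2 (order: Intercept, X1, X2)
d = sum(cvec * pooled$qbar)                       # difference of the two log odds ratios
se.d = sqrt(drop(t(cvec) %*% pooled$t %*% cvec))  # SE of the contrast from the pooled covariance
z = d / se.d
2 * pnorm(-abs(z))                                # two-sided Wald p-value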
Could someone please explain whether the output I have is sufficient for this method?