I have two predictors in a binary logistic regression model, both on the same scale (standardized). I find it puzzling that the variable with the smaller OR (1.3) is significant at p < .05, while the variable with the larger OR (2.0) is NOT significant in the same model.
More importantly, when the two variables are tested in separate models, the variable with the larger OR also yields a higher predicted probability of the outcome (75% vs. 60% for the variable with the smaller OR).
Could someone please explain how/why this happens, and whether it could be diagnostic of a problem (e.g., multicollinearity)?
Results
f1=with(data=imp, glm(Y~X1+X2, family=binomial(link="logit")))
s01=summary(pool(f1))
s01
est se t df Pr(>|t|)
(Intercept) -1.7805826 0.1857663 -9.585070 391.0135 0.00000000
X1 0.2662796 0.1308970 2.034268 390.4602 0.04259997
X2 0.6757952 0.3869652 1.746398 395.6098 0.08151794
cbind(exp(s01[, c("est", "lo 95", "hi 95")]), pval=s01[, "Pr(>|t|)"])
est lo 95 hi 95 pval
(Intercept) 0.1685399 0.1169734 0.2428389 0.00000000
X1 1.3051000 1.0089684 1.6881459 0.04259997
X2 1.9655955 0.9185398 4.2062035 0.08151794
- As you can see, the estimates in s01 are log odds ratios (taking the log of the OR returns the coefficient):
logOR=log(1.9655955)
logOR
[1] 0.6757953
Despite X2 having the larger OR (and log odds ratio), it is not significant in the model.
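For context, the significance pattern can be traced directly to the Wald statistics: the p-value depends on the ratio of each coefficient to its standard error, not on the size of the OR itself. A quick check using only the numbers from the s01 output above:

```r
# Wald t statistics from the pooled output above: estimate / SE.
b  <- c(X1 = 0.2662796, X2 = 0.6757952)   # log odds ratios from s01
se <- c(X1 = 0.1308970, X2 = 0.3869652)   # their standard errors
b / se
# X1 is about 2.03 (p ~ .043) and X2 about 1.75 (p ~ .082):
# X2's coefficient is larger, but its SE is roughly three times X1's,
# so its t statistic is smaller and only X1 crosses the .05 threshold.
```

A large SE for a standardized predictor in this setting often reflects correlation with the other predictor and/or extra between-imputation variance, which is consistent with the multicollinearity suspicion.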
Update
One possibility I have not tried is testing the difference between the two ORs (or log ORs?) using the method described here: Statistical test for difference between two odds ratios?
I am curious whether testing the difference between the two log ORs might change the conclusions, and I would appreciate suggestions on how to do so using the output above (i.e., using the standard errors of the two log OR coefficients).
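One way to sketch such a test from the output above is a Wald-type test of the difference between the two log ORs. This is only a rough sketch: it needs the covariance between the two pooled coefficients (`cov12` below is a placeholder, set to 0, which ignores the correlation between the estimates; in practice it should come from the pooled variance-covariance matrix):

```r
# Wald-type test of H0: beta_X1 = beta_X2 on the log-odds scale.
# Estimates and SEs are taken from the s01 output above.
b1 <- 0.2662796; se1 <- 0.1308970   # X1
b2 <- 0.6757952; se2 <- 0.3869652   # X2
cov12 <- 0   # assumption/placeholder: replace with the actual covariance
se_diff <- sqrt(se1^2 + se2^2 - 2 * cov12)
z <- (b2 - b1) / se_diff
p <- 2 * pnorm(-abs(z))
round(c(diff = b2 - b1, se = se_diff, z = z, p = p), 4)
```

With `cov12 = 0` this gives z of roughly 1.0 and p of roughly 0.32, i.e. no evidence that the two log ORs differ; a nonzero covariance between the estimates (typical when the predictors are correlated) would shrink or widen `se_diff` accordingly.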