Let's say I run the following regression:
fit <- glm(outcome ~ satisfaction, data = df, family = binomial)
              exp(Est.)    2.5%   97.5%   z val.       p
(Intercept)       0.413   0.279   0.611   -4.429   0.000
satisfaction      0.744   0.644   0.860   -4.007   0.000
I am interpreting this to mean that a 1-unit increase in satisfaction is associated with a decrease in the odds of the outcome of (1 - 0.744) * 100 = 25.6%. My question is: is this effect constant at every value of X? That is, can you say "for every 1-unit change in satisfaction, the odds decrease by ..."? Or is this only true at the mean of satisfaction?
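To make the question concrete, here is the kind of check I have in mind, a rough sketch that reuses the fit and df objects from above and just compares fitted odds at satisfaction values spaced one unit apart:

# Sketch: fitted odds at several satisfaction values one unit apart,
# centred at the sample mean, and the ratio between consecutive steps.
xs <- data.frame(satisfaction = mean(df$satisfaction) + c(-1, 0, 1, 2))
p  <- predict(fit, newdata = xs, type = "response")  # fitted probabilities
odds <- p / (1 - p)                                  # convert to odds
odds[-1] / odds[-length(odds)]                       # odds ratio per 1-unit step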
For comparison:
If we instead compute an average marginal effect (e.g. via the margins command), we get the following:
      factor          AME
satisfaction  -0.03943611
I'm interpreting this as: a 1-unit change in satisfaction (when satisfaction is at its mean) results in a 3.9 percentage-point decrease in the probability of the outcome. Notice that I now say "when satisfaction is at its mean", because the probability will be different depending on where on the logistic curve you are.
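To illustrate what I mean about the probability effect varying along the curve, here is a rough sketch (again reusing fit and df from above; dlogis() is the derivative of the logistic CDF, so dlogis(eta) * b1 equals p * (1 - p) * b1):

# Sketch: the slope of Pr(outcome) with respect to satisfaction at a given x
# is dlogis(b0 + b1 * x) * b1, which changes as x moves along the curve.
b   <- coef(fit)
xs  <- mean(df$satisfaction) + c(-2, 0, 2)  # at the mean and away from it
eta <- b[1] + b[2] * xs                     # linear predictor at each value
dlogis(eta) * b[2]                          # marginal effect on the probability scale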
Why do we not say "when satisfaction is at its mean" when interpreting the odds ratio? I understand that the log-odds are linear in X in the logit model, but the odds shouldn't be linear... right?
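To spell out the algebra I have in mind, writing $\beta_0$ and $\beta_1$ for the intercept and the satisfaction coefficient:
$$\log \frac{p(x)}{1 - p(x)} = \beta_0 + \beta_1 x \quad\Longrightarrow\quad \frac{p(x)}{1 - p(x)} = e^{\beta_0 + \beta_1 x},$$
so the odds are an exponential rather than a linear function of x.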