
I have an independent variable, military expenditure, in a logit regression predicting the likelihood of a state participating in a specific military exercise. The distribution of military expenditure is strongly right-skewed, which is why I am considering log-transforming it. However, when I compared this model:

logit depvar milexp v2 v3 v4

to

logit depvar log_milexp v2 v3 v4

and compared a variety of pseudo-R² measures using the fitstat command, as advised here, the results suggested that the original, untransformed model has the better overall fit.

In addition, I believe a linear relationship is theoretically plausible: an increase in milexp from a low value to that value + 1 should raise the likelihood just as much as an increase from a high value to that value + 1.

Based on the pseudo-R²s and on my theoretical expectation, can I reasonably continue with the untransformed, highly skewed independent variable?
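
For reference, the comparison looks roughly like this (a minimal sketch; fitstat is Long and Freese's user-written spost command, I am assuming milexp is strictly positive so the log is defined, and estat ic is shown only as a built-in alternative that reports AIC/BIC):

* generate the logged predictor (assumes milexp > 0 for every observation)
gen log_milexp = ln(milexp)

* untransformed model
logit depvar milexp v2 v3 v4
fitstat
estat ic

* log-transformed model
logit depvar log_milexp v2 v3 v4
fitstat
estat ic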

  • I think your question stems from the wrong assumption that independent variables must be transformed. There is a relevant post [here](https://stats.stackexchange.com/questions/159256/why-do-we-need-to-log-transform-independent-variable-in-logistic-regression). – Groovy_Worm Apr 17 '17 at 12:09
  • 2
    What theory implies linearity here and on a logit scale too? Not knowing about a theory that implies otherwise is not a theory for the opposite. It's a pity if theory is just an inflated word for guess or speculation. The bigger issue is without access to your data we have no basis for comment. It is entirely possible that the supposedly better fit is a side effect of outliers, for example. – Nick Cox Apr 17 '17 at 12:13
  • @nomoreidols Why would skewness in an independent variable be an issue? – Glen_b Apr 18 '17 at 00:36

0 Answers