
We use regularized Linear Regression to prevent the model from overfitting (reduce model complexity).

Does the same idea hold with regularized Logistic Regression?

Is regularized Logistic Regression a solution to the problem of separation? If yes, how?

I am sure I have some misunderstanding here; can anyone help clarify it for me?

jeza

1 Answer


We use regularized Linear Regression to prevent the model from overfitting (reduce model complexity). Does the same idea hold with regularized Logistic Regression?

Yes. The bias-variance trade-off exists in all areas of statistics.

Is regularized Logistic Regression a solution to the problem of separation?

Yes; even a small penalty on the coefficients will bound them away from infinity. Under separation, the unpenalized likelihood keeps improving as the coefficients grow without bound, but with a penalty you cannot make the fit arbitrarily good without eventually trading off against an ever-increasing penalty on the coefficients, so the penalized loss attains its minimum at finite coefficient values.
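To make this concrete, the $L^2$-penalized loss is $-\sum_i \log p(y_i \mid x_i, \beta) + \frac{1}{2C}\|\beta\|^2$, so as $\|\beta\| \to \infty$ the penalty term dominates. A minimal sketch (my own illustration, using scikit-learn's `LogisticRegression`, whose `C` parameter is the inverse penalty strength) on perfectly separated data shows the fitted coefficient growing as the penalty weakens, yet staying finite for any positive penalty:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Perfectly separated 1-D data: every x < 0 is class 0, every x > 0 is class 1.
X = np.array([[-2.0], [-1.0], [-0.5], [0.5], [1.0], [2.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# Without a penalty the MLE does not exist (the likelihood improves without
# bound as the coefficient grows). With an L2 penalty the minimizer is finite:
# weaker regularization (larger C) gives a larger, but still finite, coefficient.
for C in [0.1, 1.0, 100.0, 1e4]:
    coef = LogisticRegression(C=C, penalty="l2", max_iter=1000).fit(X, y).coef_[0, 0]
    print(f"C = {C:>8}: coefficient = {coef:.3f}")
```

The coefficient increases monotonically with `C` on this separable example, illustrating why any fixed, finite penalty keeps the solution bounded.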

See also: How to deal with perfect separation in logistic regression?

Sycorax
  • how does the regularized Logistic Regression solve the problem of separation? – jeza May 07 '19 at 03:27
  • What does it mean to be "solved"? What part of my explanation is unclear? – Sycorax May 07 '19 at 03:37
  • You said regularized Logistic Regression is a solution to the problem of separation; how does this work, i.e., how does the regularized regression solve this problem? – jeza May 07 '19 at 03:39
  • Did you read the next sentence? – Sycorax May 07 '19 at 03:51
  • yes I did read it.. – jeza May 07 '19 at 03:57
  • Write down the logistic regression loss with an $L^2$ penalty. What happens as the coefficients get arbitrarily large in absolute value? – Sycorax May 07 '19 at 04:19
  • @jeza https://stats.stackexchange.com/questions/401212/showing-the-equivalence-between-the-l-2-norm-regularized-regression-and could be helpful – Sycorax May 07 '19 at 04:26