Conventional logistic regression expresses the log odds
as a linear function of the predictors, $\beta'x$, so classification reduces to the linear decision boundary $\beta'x = 0$. But sometimes the training set is visibly not linearly separable (this can be checked, e.g., with linear programming). For example, if I visualize a two-dimensional training set and see that the two classes form two almost separable concentric circles, should I just model the log odds as $\beta_1 x_1^2 + \beta_2 x_2^2 + \beta_3$?
Isn't this choice rather arbitrary, since the functional form is based on nothing more than my visualization? But on the other hand, isn't the linearity assumption of conventional logistic regression just as arbitrary, justified only by eyeballing the data or by checks such as linear programming?
Do people in practice fit nonlinear functions of the predictors in logistic regression?
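To make the question concrete, here is a small sketch of the concentric-circles case using scikit-learn (the dataset, sample sizes, and noise level are my own illustrative choices, not from any real application). Fitting logistic regression on the raw coordinates $(x_1, x_2)$ fails, while fitting it on the squared features $(x_1^2, x_2^2)$, i.e., log odds $\beta_1 x_1^2 + \beta_2 x_2^2 + \beta_3$, separates the circles:

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression

# Hypothetical data: two noisy, almost separable concentric circles.
X, y = make_circles(n_samples=400, factor=0.4, noise=0.05, random_state=0)

# Plain logistic regression on (x1, x2): the decision boundary is a
# straight line, which cannot separate one circle from the other.
linear = LogisticRegression().fit(X, y)
print("linear features accuracy:", linear.score(X, y))

# Logistic regression on squared features: the log odds are
# b1*x1^2 + b2*x2^2 + b3, whose zero set is an ellipse, so the
# model can wrap a boundary around the inner circle.
X_sq = X ** 2
quad = LogisticRegression().fit(X_sq, y)
print("squared features accuracy:", quad.score(X_sq, y))
```

Note that the model is still linear in the parameters $\beta$; only the features are nonlinear, which is the usual basis-expansion view of such fits.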