Imagine that I have a data set of $N$ points: vectors $X_i \in \mathbb{R}^n$, scalars $Y_i$, and binary class labels $C_i \in \{0, 1\}$, for $i = 1, \dots, N$.
I am trying to train a binary classifier on this data where I specifically want the decision boundary to have the form $y > f(x)$, for some sufficiently flexible function $f$.
I know that I can train a linear SVM on the $X_i$ vectors concatenated with $Y_i$, which gives a boundary of the form $y > \mathrm{Linear}(x)$, but that is unfortunately too simple a boundary for my problem. I also know that I could manually map $X_i$ into a higher-dimensional space with polynomial features and still use a linear SVM, but I'd rather avoid SVMs. I was thinking about using random forests or decision trees for this, but couldn't come up with a way to make them produce a boundary of the required form. Does anyone have suggestions for how I can reformulate my problem, or which classifier to use?
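For concreteness, here is a minimal sketch of the concatenation approach I described above, with plain logistic regression fit by gradient descent standing in for the linear SVM. Everything here is illustrative: a 1-D $x$, a cubic feature map $\phi(x)$, and synthetic data whose true boundary is the quadratic $y > 0.5x^2 - 0.3$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data (1-D x for illustration):
# true boundary is y > 0.5*x**2 - 0.3
N = 2000
x = rng.uniform(-1, 1, size=N)
y = rng.uniform(-1, 1, size=N)
C = (y > 0.5 * x**2 - 0.3).astype(float)

# Polynomial feature map phi(x), concatenated with y into one design matrix
Z = np.stack([np.ones_like(x), x, x**2, x**3, y], axis=1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Plain logistic regression via gradient descent (instead of a linear SVM)
w = np.zeros(Z.shape[1])
lr = 0.5
for _ in range(5000):
    p = sigmoid(Z @ w)               # P(C=1 | phi(x), y)
    w -= lr * (Z.T @ (p - C)) / N    # gradient of mean logistic loss

# If the learned coefficient on y is positive, the linear boundary
# w . [phi(x), y] = 0 can be rewritten as y > f(x), with f polynomial in x
wy = w[-1]
f_coeffs = -w[:-1] / wy              # f(x) = f_coeffs . phi(x)
pred = (y > Z[:, :-1] @ f_coeffs).astype(float)
print("y-coefficient:", wy, "accuracy:", (pred == C).mean())
```

The point of the sketch is that any linear classifier on the concatenation $[\phi(x), y]$ whose coefficient on $y$ is positive already yields a boundary of the exact form $y > f(x)$, so the SVM itself isn't essential, only the linear-in-features structure is. The limitation, as noted above, is that $f$ is then restricted to the span of the hand-chosen features $\phi$.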