Does anyone know of good robust algos to estimate partial derivatives of a regression model? I am talking about a general regression model like this:
$\mathbb{E}(y \mid x_1, x_2, \ldots, x_n) = f(x_1, x_2, \ldots, x_n)$, where I want estimates of $\frac{\partial f}{\partial x_k}$.
Such derivatives are of great importance in medicine, economics, the social sciences, etc. A linear model gives a constant approximation of these partial derivatives (its coefficients), which is only valid near the mean of the data.
This approximation can be made valid further from the mean by including polynomial terms, but the number of parameters grows combinatorially with the polynomial degree ($\binom{n+d}{d}$ terms for degree $d$ in $n$ variables), which quickly makes a plain linear model infeasible (except perhaps with the Lasso?).
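For concreteness, here is a rough sketch of what I mean by the polynomial + Lasso route (the data, degree, and penalty are made up for illustration): fit polynomial features with an L1 penalty, then differentiate the smooth fitted surface, here via a central finite difference on the fitted polynomial.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # 3 toy covariates
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=500)

poly = PolynomialFeatures(degree=3, include_bias=False)
Z = poly.fit_transform(X)
model = Lasso(alpha=1e-3, max_iter=50_000).fit(Z, y)

def partial_derivative(x0, k, eps=1e-5):
    # Central finite difference on the fitted polynomial; since the fit
    # is a polynomial, this is accurate to O(eps^2). (Exact derivatives
    # could also be read off the monomial exponents in poly.powers_.)
    xp, xm = x0.copy(), x0.copy()
    xp[k] += eps
    xm[k] -= eps
    fp = model.predict(poly.transform(xp.reshape(1, -1)))[0]
    fm = model.predict(poly.transform(xm.reshape(1, -1)))[0]
    return (fp - fm) / (2 * eps)

x0 = np.zeros(3)
print(partial_derivative(x0, k=0))   # should be near cos(0) = 1
```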
For better local estimates over the whole input space, natural candidates would be kernel regression and feedforward neural networks, which have smooth functional forms (unlike tree-based methods) and from which partial derivatives are easy to recover. But I don't know how robust those derivative estimates are, since these models are trained to minimise prediction error, not to recover the correct derivatives...
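As an example of what "easy to recover" means here, a sketch with kernel ridge regression and an RBF kernel (hyperparameters are arbitrary): the fit is $\widehat{f}(x) = \sum_i \alpha_i \exp(-\gamma \lVert x - x_i \rVert^2)$, so its gradient is available in closed form.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + rng.normal(scale=0.05, size=400)

gamma = 1.0
kr = KernelRidge(kernel="rbf", gamma=gamma, alpha=1e-2).fit(X, y)

def gradient(x0):
    # grad f(x0) = sum_i alpha_i * (-2*gamma) * (x0 - x_i) * k(x0, x_i)
    diff = x0 - kr.X_fit_                               # shape (n, d)
    k = np.exp(-gamma * np.sum(diff**2, axis=1))        # shape (n,)
    return (-2 * gamma) * (kr.dual_coef_ * k) @ diff    # shape (d,)

x0 = np.array([0.5, 0.5])
# true gradient is [cos(.5)cos(.5), -sin(.5)sin(.5)] ~ [0.77, -0.23]
print(gradient(x0))
```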
I know gradient boosting (and really all regression models) gives a "partial dependence", but that is not a partial derivative: it is a function $f_k(x_k)$ giving the mean value of the fitted regression function $\widehat{f}$ for a fixed value of $x_k$, averaged over the realisations of the other variables $(x_1, \ldots, x_{k-1}, x_{k+1}, \ldots, x_n)$ in the training dataset.
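To make that distinction concrete, partial dependence computed by hand (model and data are placeholders): for each grid value $v$, feature $k$ is clamped to $v$ in every training row and the predictions are averaged, so the slope of the resulting curve is at best an averaged derivative, not a local one.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X[:, 0]**2 + X[:, 1] + rng.normal(scale=0.1, size=500)

gbm = GradientBoostingRegressor().fit(X, y)

def partial_dependence(model, X, k, grid):
    pd = []
    for v in grid:
        Xv = X.copy()
        Xv[:, k] = v                      # clamp feature k, keep the others
        pd.append(model.predict(Xv).mean())
    return np.array(pd)

grid = np.linspace(-2, 2, 9)
pd = partial_dependence(gbm, X, 0, grid)
# Slope of the partial-dependence curve: an average of df/dx_0 over the
# data, not the derivative at any particular point.
print(np.gradient(pd, grid))
```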
Any help would be greatly appreciated!