In the case of regression, we can apply a boosting approach as follows (a short sketch is given after the list):
- Train a very simple model using the data set.
- Compute the difference between the targets and the predictions and use this difference as the new target.
- Train a new model on this new target.
- Repeat as long as it makes sense.
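To make this concrete, here is a minimal sketch of what I mean in Python. Using shallow scikit-learn trees as the "very simple model" and the helper names `boost_regression` / `boosted_predict` are just my own choices for illustration:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def boost_regression(X, y, n_rounds=10):
    """Sequentially fit very simple models, each on the residuals left by the previous ones."""
    models = []
    target = np.asarray(y, dtype=float)
    for _ in range(n_rounds):
        # Train a very simple model (here a depth-1 tree) on the current target.
        tree = DecisionTreeRegressor(max_depth=1)
        tree.fit(X, target)
        models.append(tree)
        # The difference between the current target and the predictions
        # becomes the target for the next round.
        target = target - tree.predict(X)
    return models


def boosted_predict(models, X):
    # The ensemble prediction is the sum of the individual predictions.
    return sum(model.predict(X) for model in models)
```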
Can this idea be generalised to classification?
For example, we train a very simple model to predict the probabilities of two mutually exclusive classes. Now we need to somehow define a difference between the targets and the predictions, but this is no longer straightforward. Is there a way to do it?
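For concreteness, here is a minimal sketch of the setup I am describing (scikit-learn and the synthetic data are used only for illustration; `naive_residual` is just my name for the quantity I am unsure about):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic two-class data, just to have something to run on.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Train a very simple model that outputs class probabilities.
clf = DecisionTreeClassifier(max_depth=1)
clf.fit(X, y)
p = clf.predict_proba(X)[:, 1]  # predicted probability of class 1

# The naive "difference" between targets and predictions would be y - p,
# but it is a vector of real numbers rather than class labels, so it is
# unclear what the next model should be trained on.
naive_residual = y - p
```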