Are AdaBoost and gradient boosting models highly correlated? Will including both models in an ensemble improve accuracy significantly? I ask because if the models in an ensemble are uncorrelated, the ensemble can achieve much higher accuracy.
1 Answer
As the names suggest, AdaBoost and gradient boosting are similar because both are boosting models: we add weak learners one by one, sequentially, on each iteration, which reduces the model's "bias" at the cost of increasing its "variance". My answer to this post may help you see boosting step by step: How does linear base leaner works in boosting? And how it works in xgboost library?
The difference between the two is that AdaBoost fits classification trees and re-weights misclassified examples at each iteration, whereas gradient boosting fits regression trees to the gradient of the loss (for binary classification with log loss, it fits the log-odds).
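Since your question is really an empirical one, a quick way to get a feel for it is to train both models on the same data and measure how correlated their predictions are. Below is a minimal sketch using scikit-learn on a synthetic dataset; the dataset, hyperparameters, and the simple probability-averaging ensemble are my own illustrative choices, not something from your setup:

```python
# Illustrative sketch: how correlated are AdaBoost and gradient
# boosting predictions, and does averaging them help?
# (Synthetic data and hyperparameters are assumptions for the demo.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ada = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
gbm = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Correlation of the predicted class-1 probabilities on the test set.
p_ada = ada.predict_proba(X_test)[:, 1]
p_gbm = gbm.predict_proba(X_test)[:, 1]
print("prediction correlation:", np.corrcoef(p_ada, p_gbm)[0, 1])

# A simple averaging ensemble, to see whether combining the two helps.
p_avg = (p_ada + p_gbm) / 2
for name, p in [("ada", p_ada), ("gbm", p_gbm), ("avg", p_avg)]:
    print(name, "accuracy:", ((p > 0.5) == y_test).mean())
```

If the printed correlation is close to 1, the averaged ensemble will usually gain little over the better single model, which is the crux of your question.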
- Yeah, that's correct, but my question is: will including both AdaBoost and XGBoost in an ensemble give a significant improvement in accuracy? – mathkid Nov 09 '16 at 17:10
- From my limited understanding, AdaBoost seems to have fallen out of favor since gradient boosting was invented, and gradient boosting is really a general framework for boosting. It's hard to say whether anyone would want to combine the two. – Haitao Du Nov 09 '16 at 17:15