If I have
a set RF
of Decision Trees trained using a Random Forest algorithm and
a set AB
of Decision Trees (stumps) trained using AdaBoost,
do I see it correctly that I can uniformly implement both ensemble methods with something along the lines of
double ensemble(Trees, weights) {
    dec = 0
    for T in Trees:
        dec += weights[T] * T.getDecision() // -1 for False, +1 for True
    return dec
}
and call it for the Random Forest using
ensemble(RF, [1,1,...])
and for Adaboost using
ensemble(AB, w)
where w
is the weight vector obtained from the AdaBoost training.
I.e., after training, the only difference between AdaBoost and Random Forest is that AdaBoost uses a weighted sum of the decisions while Random Forest doesn't?
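To make the idea concrete, here is a minimal Python sketch of that uniform combiner. It assumes each trained tree is reduced to a callable returning ±1 (a stand-in for `T.getDecision()`), and it takes the sign of the weighted sum to get the final class, since for binary ±1 votes with unit weights the sign of the sum is exactly the majority vote a Random Forest uses. The `Tree` wrapper and the example weights are purely illustrative, not output of any real training run:

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

# Hypothetical stand-in for a trained tree: any callable mapping a sample to -1 or +1.
@dataclass
class Tree:
    decide: Callable[[Sequence[float]], int]

def ensemble(trees: List[Tree], weights: List[float], x: Sequence[float]) -> int:
    """Uniform combiner: sign of the weighted sum of the trees' +/-1 votes."""
    score = sum(w * t.decide(x) for t, w in zip(trees, weights))
    return 1 if score >= 0 else -1

# Three toy stumps, each thresholding one aspect of the input.
trees = [Tree(lambda x: 1 if x[0] > 0 else -1),
         Tree(lambda x: 1 if x[1] > 0 else -1),
         Tree(lambda x: 1)]

x = [0.5, -2.0]

# "Random Forest" call: unit weights, i.e. a plain majority vote.
rf_vote = ensemble(trees, [1, 1, 1], x)        # votes +1, -1, +1 -> sum 1 -> +1

# "AdaBoost" call: same combiner, but with (made-up) learned weights w.
# With real AdaBoost these would be alpha_t = 0.5 * ln((1 - err_t) / err_t).
ab_vote = ensemble(trees, [0.2, 0.9, 0.1], x)  # sum 0.2 - 0.9 + 0.1 = -0.6 -> -1

print(rf_vote, ab_vote)
```

Note that the same three votes flip the final decision once the second tree carries a large weight, which is precisely the difference the question describes. (One caveat: some Random Forest implementations, e.g. scikit-learn's, average class probabilities rather than taking a hard majority vote, so the equivalence holds for the classic hard-voting formulation.)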