I use a boosted tree model to predict stock direction, which is a binary classification problem.
The majority class is the down direction and the minority class is the up direction. The model predicts the majority class better than the minority class; you could say it somewhat overfits to the majority class.
So I leave the test set unchanged and impose a larger weight on the minority class when computing the training loss. Depending on how large that weight is, the overall test loss either increases slightly or decreases a lot. BUT the worst part is that the accuracy on the majority class always decreases.
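For concreteness, here is a minimal sketch of this kind of minority-class weighting, using XGBoost's `scale_pos_weight` as an example (my actual setup may differ; the data below is a random placeholder, and the same idea works via sample weights in LightGBM or scikit-learn):

```python
import numpy as np
from xgboost import XGBClassifier

# Placeholder data: y == 1 is the minority "up" class, y == 0 the majority "down" class
rng = np.random.default_rng(0)
X_train = rng.standard_normal((1000, 10))
y_train = rng.binomial(1, 0.3, 1000)

# scale_pos_weight multiplies the loss contribution of positive (minority) examples;
# a common starting point is the class ratio n_negative / n_positive
ratio = (y_train == 0).sum() / (y_train == 1).sum()

model = XGBClassifier(
    n_estimators=200,
    scale_pos_weight=ratio,  # > 1 up-weights the minority class in the training loss
    eval_metric="logloss",
)
model.fit(X_train, y_train)
```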
My question: how can I improve the accuracy of both the majority class and the minority class when using imbalanced-learning techniques (other imbalanced-learning methods are also welcome)? Is that even possible?