I am training a random forest model with scikit-learn for a binary classification task. For some reason, when I set the max_depth parameter to 1, the model averages about 90% accuracy on the positive class (sensitivity) but only around 30% on the negative class (specificity). When I increase max_depth, sensitivity and specificity begin to even out. I am unsure what causes the skewed sensitivity at low depth; does anyone know a possible explanation?
Note: My train and test sets both contain roughly equal numbers of positive and negative examples.
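For context, here is a minimal sketch of the kind of setup I mean. The data here is synthetic (via make_classification) as a stand-in for my real dataset, so the exact numbers will differ, but the model configuration and the sensitivity/specificity calculation match what I'm doing:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for my data: roughly balanced binary labels
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.5, 0.5], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Forest of depth-1 trees (decision stumps)
clf = RandomForestClassifier(max_depth=1, random_state=0)
clf.fit(X_train, y_train)

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
sensitivity = tp / (tp + fn)   # recall on the positive class
specificity = tn / (tn + fp)   # recall on the negative class
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```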