I think what you are asking is doable, but it defeats the purpose of having a random forest. A random forest is an ensemble model: the results of many weak estimators (the individual decision trees) are combined to produce a single strong estimator.
However, if you want to go ahead and do it, you can proceed as follows:
- Choose a metric for evaluating the individual decision trees.
- Run that metric on the same dataset for every decision tree and pick the tree with the best score.

For example, with scikit-learn:
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Build a toy dataset and fit a random forest
X, y = make_classification(n_samples=1000, n_features=4, n_informative=2,
                           n_redundant=0, random_state=0, shuffle=False)
n_estimators = 100
clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=2, random_state=0)
clf.fit(X, y)

# Score every individual tree on the same dataset
estimatorAccuracy = []
for curEstimator in range(n_estimators):
    estimatorAccuracy.append([curEstimator,
                              accuracy_score(y, clf.estimators_[curEstimator].predict(X))])

# Rank the trees by accuracy and pull out the best one
estimatorAccuracy = pd.DataFrame(estimatorAccuracy, columns=['estimatorNumber', 'Accuracy'])
estimatorAccuracy.sort_values(inplace=True, by='Accuracy', ascending=False)
bestDecisionTree = clf.estimators_[estimatorAccuracy.head(1)['estimatorNumber'].values[0]]
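
Note that the snippet above scores every tree on the same data the forest was trained on, which tends to favor the most overfit trees. Here is a minimal sketch of the same idea with a held-out split instead (the 70/30 split size is an arbitrary choice for illustration):

from sklearn.model_selection import train_test_split

# Hold out a test set so per-tree scores are not computed on training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=n_estimators, max_depth=2, random_state=0)
clf.fit(X_train, y_train)

# Score each tree on the held-out set and pick the best one
scores = [accuracy_score(y_test, est.predict(X_test)) for est in clf.estimators_]
bestDecisionTree = clf.estimators_[scores.index(max(scores))]

Whichever variant you use, keep in mind that a single tree selected this way will usually generalize worse than the averaged prediction of the full forest.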