I am fitting three different models on a 5-class imbalanced dataset. The results show the model accuracy always being exactly equal to the recall. How can this be possible?
1. RF model results:
Test acc: 0.6285670349948376
Recall: 0.6285670349948376
Precision: 0.6171361174985392
f1_score: 0.5886671088640658
ROC AUC score: 0.7998931710957794
2. MLP model results:
Accuracy: 0.44232332330133345
Recall: 0.44232332330133345
f1_score: 0.4242650817694506
Precision: 0.4707025922895617
ROC AUC score: 0.6031862642540948
3. CNN model results:
Accuracy: 0.7411148092888021
Recall: 0.7411148092888021
f1_score: 0.741477630295568
Precision: 0.7972578281551425
ROC AUC score: 0.8291519390873785
Models' confusion matrices:
1. RF model
[[ 8753 87 494 5183 84]
[ 344 449 26 578 1]
[ 1429 33 1311 5504 40]
[ 1431 104 668 18072 26]
[ 350 0 11 515 28]]
2. MLP model:
[[11106 574 677 1698 546]
[ 904 172 106 180 36]
[ 4897 657 530 2133 100]
[ 7668 2448 1532 8301 352]
[ 490 36 33 319 26]]
3. CNN model:
[[6195 28 137 226 52]
[ 108 789 39 16 6]
[ 95 5 3113 376 10]
[2506 326 2398 8570 238]
[ 72 10 73 46 705]]
In all cases, accuracy = recall! How can this be possible?
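As a sanity check, the reported RF accuracy can be recomputed directly from the confusion matrix printed above (a minimal sketch; the matrix values are simply copied from the output, and NumPy is assumed):

import numpy as np

# RF confusion matrix as printed above (rows = true class, columns = predicted class)
cm = np.array([[ 8753,   87,  494,  5183,  84],
               [  344,  449,   26,   578,   1],
               [ 1429,   33, 1311,  5504,  40],
               [ 1431,  104,  668, 18072,  26],
               [  350,    0,   11,   515,  28]])

accuracy = np.trace(cm) / cm.sum()            # correctly classified / total samples
per_class_recall = np.diag(cm) / cm.sum(axis=1)   # recall of each class separately
print(accuracy)            # ~0.62857, matching the reported test accuracy
print(per_class_recall)    # the individual per-class recalls differ from the accuracy

So the printed numbers are internally consistent; the puzzle is why the overall recall still comes out identical to the accuracy.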
EDIT
This is how the metrics are calculated:
1. RF model:
import sklearn.metrics
from sklearn.metrics import roc_auc_score

# Hard class predictions for accuracy, F1, recall and precision
pred_test = model.predict(x_test)
test_acc = sklearn.metrics.accuracy_score(y_test, pred_test)
f1 = sklearn.metrics.f1_score(y_test, pred_test, average='weighted')
recall = sklearn.metrics.recall_score(y_test, pred_test, average='weighted')
precision = sklearn.metrics.precision_score(y_test, pred_test, average='weighted')

# Class probabilities for ROC AUC (one-vs-rest, support-weighted)
pred_prob = model.predict_proba(x_test)
roc = roc_auc_score(y_test, pred_prob, average='weighted',
                    multi_class='ovr', labels=[0, 1, 2, 3, 4])
2. MLP model:
# y_pred holds the predicted class labels for x_test
accuracy = sklearn.metrics.accuracy_score(y_test, y_pred)
f1 = sklearn.metrics.f1_score(y_test, y_pred, average='weighted')
recall = sklearn.metrics.recall_score(y_test, y_pred, average='weighted')
precision = sklearn.metrics.precision_score(y_test, y_pred, average='weighted')
3. CNN model:
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score

# Predicted class probabilities, converted to hard labels via argmax
Pred = model.predict(x_test, batch_size=32)
Pred_Label = np.argmax(Pred, axis=1)
labels = [0, 1, 2, 3, 4]
...
ConfusionM = confusion_matrix(list(y_test_ori), Pred_Label, labels=labels)
class_report = classification_report(list(y_test_ori), Pred_Label, labels=labels)
roc = roc_auc_score(y_test_ori, Pred, average='weighted',
                    multi_class='ovr', labels=labels)
print(f" ROC score: {roc}")