I am working on a binary classification problem with an imbalanced dataset (75:25). Class 0 is the minority class at only 25%.
My objective is to predict the 0s correctly as 0s, i.e. to maximize the recall/F1-score for class 0.
However, I realized that the scoring functions seem to focus only on maximizing the metric for the positive/majority class. Is that the case? I might be wrong.
For example, the code below maximizes the F1-score of the positive class (the majority class in my data):
model = GridSearchCV(rfc, param_grid, cv=skf, scoring='f1')
model.fit(ord_train_t, y_train)
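For context, here is a minimal reproducible version of my setup. The real `rfc`, `param_grid`, `skf`, and `ord_train_t`/`y_train` are replaced with synthetic stand-ins (the grid values and data here are illustrative, not my actual ones):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Synthetic stand-in for my data: class 0 is the 25% minority class
X, y = make_classification(n_samples=400, weights=[0.25, 0.75], random_state=42)

rfc = RandomForestClassifier(random_state=42)
param_grid = {"n_estimators": [50, 100]}
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# scoring='f1' scores with pos_label=1 by default, i.e. the majority class here
model = GridSearchCV(rfc, param_grid, cv=skf, scoring="f1")
model.fit(X, y)
print(model.best_score_)
```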
But my objective is to maximize the F1-score of the minority class (the negative class, label 0), which is the more costly and important one in my case.
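Digging through the docs, I wonder if something like the following is the intended way to do this, with `make_scorer` passing `pos_label=0` through to `f1_score` so class 0 is treated as the positive class. I am not sure this is correct, which is partly why I am asking (the data and grid below are toy stand-ins):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Toy stand-in data; class 0 is the 25% minority class, as in my dataset
X, y = make_classification(n_samples=400, weights=[0.25, 0.75], random_state=42)

# Is pos_label=0 the right way to target the minority class?
f1_class0 = make_scorer(f1_score, pos_label=0)

rfc = RandomForestClassifier(random_state=42)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
model = GridSearchCV(rfc, {"n_estimators": [50, 100]}, cv=skf, scoring=f1_class0)
model.fit(X, y)
print(model.best_score_)
```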
Is my only option therefore to invert the labels, i.e. map 1s to 0s and 0s to 1s?
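To be concrete about what I mean by inverting, it would just be something like this (toy label array for illustration):

```python
import numpy as np

y_train = np.array([1, 0, 1, 1, 0])  # toy example of my binary labels
y_inverted = 1 - y_train             # 1s become 0s and 0s become 1s
print(y_inverted)                    # [0 1 0 0 1]
```

This feels like a workaround rather than a proper solution, though.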
Isn't there a method available to focus on maximizing the metrics for the minority class? Or is my understanding incorrect, and the metrics work the same for both classes, with no preference between majority and minority class during binary-classification metric optimization?
Was it wrong on my part to encode the labels this way? Should the class I want to predict always be labeled 1?