I am working on a 3-class classification problem. We are cross-validating via a leave-one-out approach, and in some folds the test data contains no instances of one of my three classes. For those folds, the corresponding row of the confusion matrix is all zeros.
I want a metric of how accurate my model is when predicting the other two classes, and I would like to keep the third class in the model despite this occasional imbalance. However, the summary statistics I have learned about in the past (Cohen's kappa and the F1 score) break down when applied to these confusion matrices.
One workaround I have tried is to delete the all-zero row (in my case, the 2nd row) of the confusion matrix. This leaves a 2x3 confusion matrix which, while strange at first glance, does work within the F1 score formula. I'm worried that this is not correct, however, and wanted to see what the Stack Exchange community thinks. Thank you for your help!
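To make the problem concrete, here is a minimal sketch (assuming scikit-learn; the labels and predictions are invented toy data, not my actual folds). When the true labels of a fold contain no instances of class 2, the per-class F1 for that class is 0/undefined, which drags down the macro average. Passing only the classes actually present in `y_true` via the `labels` argument corresponds to "deleting the empty row":

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical fold: the test data contains only classes 0 and 1;
# class 2 never appears among the true labels.
y_true = [0, 0, 1, 1, 0]
y_pred = [0, 1, 1, 1, 2]

# Macro F1 over all three classes: class 2 has no true instances,
# so its F1 is 0 (with zero_division=0), pulling the average down.
f1_all = f1_score(y_true, y_pred, labels=[0, 1, 2],
                  average="macro", zero_division=0)

# Macro F1 restricted to the classes present in y_true -- the
# "delete the empty row" idea expressed through the labels argument.
f1_present = f1_score(y_true, y_pred, labels=[0, 1],
                      average="macro", zero_division=0)

print(f1_all, f1_present)
```

Whether averaging only over the present classes is statistically defensible is exactly what I am asking about.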