Suppose you have a perfectly balanced dataset.

In which applications is accuracy a good metric?

Are there applications where it's preferable to precision, recall, and F1 (all at the same time)?

ldmat
  • None of accuracy, precision, recall, or F1 is generally a good measure of performance. See [this answer](https://stats.stackexchange.com/a/90705/28500) and many others on this site. You need to use a [proper scoring rule](https://stats.stackexchange.com/q/91088/28500) to get well-calibrated probability estimates, and then use your tradeoff between false-positives and false-negatives to choose the probability cutoff for classification if that's necessary. The choice of probability cutoff is implicitly made (typically at p = 0.5) when you use accuracy, precision, recall, or F1. – EdM Jan 01 '20 at 19:04
  • I don't understand why my question was locked and marked as duplicate, when in fact it's literally the complete opposite of the other "duplicate" question. The other person asked when it's not preferable, I asked when it is preferable. @EdM – ldmat Jan 03 '20 at 01:28
  • I think that the reason for closing this question as a duplicate is that the discussion in the specific thread noted in the closing notice, and in the other threads linked in my comment, makes it pretty clear that accuracy is _never_ preferable for building a model, even on balanced data sets. Neither are precision, recall, and F1. Each of those tends to confound the probability-modeling process with (often hidden) tradeoffs between false-positives and false-negatives, which should be handled separately from modeling. None of those is preferable to strictly proper scoring rules. – EdM Jan 04 '20 at 04:30

0 Answers