
When working with ROC-AUC as a metric for binary classification, one often takes 0.5 as the baseline value of a random classifier (i.e., a data-blind classifier that assigns classes to test instances at random with equal probability).

I have read that average precision (or more generally, mean average precision) may be a better metric when the positive class is of higher interest than the negative class. That claim deserves its own question, but setting it aside: what is a reasonable random baseline value for average precision?

I am inclined to think that such a random baseline should be P/(P+N) (i.e., the fraction of positives among all instances). How should I go about establishing this baseline? A small simulation illustrating what I mean is below.
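For concreteness, here is a minimal sketch (assuming scikit-learn's `average_precision_score` and made-up counts `P = 100`, `N = 900`) that scores a data-blind classifier, whose scores are drawn independently of the labels, and compares the resulting average precision to P/(P+N):

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Hypothetical class counts: P positives and N negatives
P, N = 100, 900
y_true = np.concatenate([np.ones(P), np.zeros(N)])

# Data-blind "random" classifier: scores independent of the labels
n_repeats = 200
aps = []
for _ in range(n_repeats):
    y_score = rng.uniform(size=P + N)
    aps.append(average_precision_score(y_true, y_score))

print(f"Mean AP of random scores: {np.mean(aps):.3f}")  # close to 0.100
print(f"P / (P + N):              {P / (P + N):.3f}")    # 0.100
```

Averaging over repeated random score vectors is only there to reduce the variance of a single draw; a single run already lands near the prevalence.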

Amelio Vazquez-Reina
  • Does this answer your question? [What is "baseline" in precision recall curve](https://stats.stackexchange.com/questions/251175/what-is-baseline-in-precision-recall-curve) – usεr11852 Feb 08 '20 at 01:36
