
I'm testing several classifiers in Weka Experimenter. Some of them have both low accuracy (the Percent_correct statistic) and high AUC at the same time. How should the quality of such classifiers be interpreted? Should they be considered bad (because of their low accuracy) or good (because of their high AUC)? Under which circumstances should one or the other of these performance measures prevail in judging quality?

Note: Both questions mentioned in the comments below add useful insights. However, I would also like to know when to strive for better accuracy (possibly at the cost of worse AUC) and when to strive for better AUC (possibly at the cost of worse accuracy).
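
To make the situation concrete, here is a minimal sketch (in Python with scikit-learn rather than Weka; the score distributions are made up purely for illustration) of how one and the same set of predicted scores can produce a high AUC but a low accuracy: the scores rank the two classes almost perfectly, but they are poorly calibrated, so the default 0.5 threshold mislabels nearly all negatives.

```python
# Illustrative sketch: high AUC (good ranking) with low accuracy (bad threshold).
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)

# 1000 negatives and 1000 positives; the (hypothetical) scores separate
# the classes almost perfectly, but both clusters sit above 0.5.
y_true = np.concatenate([np.zeros(1000), np.ones(1000)])
scores = np.concatenate([
    rng.normal(0.60, 0.05, 1000),   # negatives cluster around 0.60
    rng.normal(0.80, 0.05, 1000),   # positives cluster around 0.80
])

# AUC is a pure ranking measure, so it is close to 1.0 here.
print("AUC:", roc_auc_score(y_true, scores))

# Thresholding at 0.5 labels almost everything positive,
# so accuracy collapses to roughly 50%.
y_pred = (scores >= 0.5).astype(int)
print("Accuracy at 0.5:", accuracy_score(y_true, y_pred))
```

In other words, a classifier with high AUC and low accuracy may simply need a better decision threshold (or calibration), whereas a classifier whose AUC is also poor cannot be rescued that way.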

Franco
    Have you seen https://stats.stackexchange.com/questions/346830/ or https://stats.stackexchange.com/questions/200815/? Do those answer your question? – jld Jun 06 '18 at 17:46
    I see you've edited your question to specify that you'd like to know when to prefer accuracy. We also have a thread about that: https://stats.stackexchange.com/questions/297653/when-is-accuracy-score-preferred-to-aucroc – Sycorax Jun 06 '18 at 19:10
  • And see also https://stats.stackexchange.com/questions/90659/why-is-auc-higher-for-a-classifier-that-is-less-accurate-than-for-one-that-is-mo?rq=1 – Sycorax Jun 06 '18 at 19:11
    And https://stats.stackexchange.com/questions/312780/why-is-accuracy-not-the-best-measure-for-assessing-classification-models/312787#312787 – Sycorax Jun 06 '18 at 19:12
If this question remains open, please replace "significantly low accuracy" and "significantly high AUC" with less vague terms. It matters how low or how high. – rolando2 Jun 06 '18 at 19:29

0 Answers