
I'm trying to better understand the ROC curve as used for evaluating ML classifiers, and I was looking at an explanatory figure showing which curves count as better and which as worse. However, I'm thinking that, contrary to what the figure shows, perhaps "worse" is really whatever is closest to the random diagonal, because any classifier that is not random can always be functionally inverted. That is, wouldn't a curve well under the diagonal actually be better than the diagonal itself?

Is an off-diagonal (below the diagonal) ROC curve not always better than random?
(Please note that none of the previous answers actually address this question.)

[Figure: ROC plot with the random-guess diagonal; curves above it are labelled as better, curves below it as worse.]
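For what it's worth, here is a minimal sketch of the symmetry the question relies on (assuming Python with NumPy and scikit-learn, which the question doesn't specify): negating a classifier's scores reflects its ROC curve across the diagonal, so a curve with AUC well below 0.5 becomes one with AUC equally far above it.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical "worse than random" scorer: it tends to assign
# *lower* scores to the positive class.
y = rng.integers(0, 2, size=1000)                    # true labels
scores = rng.normal(size=1000) - 0.8 * y             # positives pushed down

auc = roc_auc_score(y, scores)
flipped_auc = roc_auc_score(y, -scores)              # invert the ranking

print(f"AUC of the bad scorer:     {auc:.3f}")         # well below 0.5
print(f"AUC after flipping scores: {flipped_auc:.3f}") # equals 1 - AUC (no ties)
```

Since the ROC curve depends only on how the scores rank the examples, negating them is all it takes, which is why a curve hugging the lower-right corner carries as much discriminative information as one hugging the upper-left.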


Asked by not2qubit
    You want to take the opposite labels of a bad classifier...understandable. Let's draw an analogy. I predict if the weather will be rainy or sunny, and I'm inaccurate. I often predict sunny days as rainy and rainy days as sunny. You catch this, flip my predictions, and get good results. Am I suddenly good at predicting the weather? (Further, the model you get by flipping the categories of a poor classifier still suffers from issues related to probability predictions.) – Dave Oct 25 '20 at 18:12
  • Thanks Dave, but can you expand on the statement `"...flipping the categories of a poor classifier still suffers from issues related to probability predictions."`, in non-stat language? – not2qubit Oct 25 '20 at 19:13
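One way to unpack Dave's point with a sketch (again assuming Python/scikit-learn; the distorted scorer `q_bad` is a made-up illustration, not anyone's actual model): flipping the output of an inverted classifier restores the ranking, and hence the ROC curve, but it does not repair the probability estimates themselves, as a calibration metric like the Brier score shows.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(1)
n = 50_000

p_true = rng.uniform(0.0, 1.0, size=n)   # ground-truth event probabilities
y = rng.binomial(1, p_true)              # observed binary outcomes

# A poor probabilistic classifier: inverted *and* distorted.
q_bad = (1.0 - p_true) ** 3

# "Flipping" its output repairs the ranking...
q_flip = 1.0 - q_bad

print(f"AUC (bad):     {roc_auc_score(y, q_bad):.3f}")   # well below 0.5
print(f"AUC (flipped): {roc_auc_score(y, q_flip):.3f}")  # same as a good ranker

# ...but not the calibration: the flipped probabilities are still warped.
print(f"Brier (flipped):    {brier_score_loss(y, q_flip):.3f}")
print(f"Brier (calibrated): {brier_score_loss(y, p_true):.3f}")  # noticeably lower
```

So the flipped model is "better than random" in the ROC sense while still being a bad probability estimator, which is the distinction the comment is drawing.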

0 Answers