Questions tagged [auc]

AUC stands for the Area Under the Curve and usually refers to the area under the receiver operating characteristic (ROC) curve.

AUC stands for the Area Under the Curve. Technically, it can refer to the area under any curve used to measure the performance of a model; for example, it could be the area under a precision-recall curve. However, when not otherwise specified, AUC is almost always taken to mean the area under the Receiver Operating Characteristic (ROC) curve. The acronym AUROC is sometimes used to indicate this AUC with greater precision.

The curves for which the AUC might be calculated are usually plotted within a unit square, so the maximum AUC is $1$. Unless the underlying model is badly misspecified, the effective minimum AUC is $0.5$, the value achieved by random guessing. These analytical bounds help make the AUC interpretable.
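
To make these bounds concrete, here is a minimal Python sketch (data and function names are illustrative) that computes the AUC from its rank interpretation: the fraction of positive-negative pairs in which the positive instance gets the higher score. A perfect ranking yields $1$; an uninformative one lands near $0.5$.

```python
import random

def roc_auc(y_true, y_score):
    """ROC AUC as the probability that a random positive
    outranks a random negative (ties count as 1/2)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(0)
y = [random.randint(0, 1) for _ in range(1000)]

# Perfect scorer: every positive outranks every negative -> AUC = 1.
print(roc_auc(y, [yi + random.uniform(0, 0.5) for yi in y]))

# Uninformative scorer: scores independent of the label -> AUC near 0.5.
print(roc_auc(y, [random.random() for _ in y]))
```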

A good place to start reading about ROC AUC is Tom Fawcett, "An Introduction to ROC Analysis," Pattern Recognition Letters 27(8), 2006, pp. 861–874.

525 questions
275 votes • 6 answers

What does AUC stand for and what is it?

Searched high and low and have not been able to find out what AUC, as it relates to prediction, stands for or means.
josh • 3,119
95 votes • 5 answers

How to calculate Area Under the Curve (AUC), or the c-statistic, by hand

I am interested in calculating area under the curve (AUC), or the c-statistic, by hand for a binary logistic regression model. For example, in the validation dataset, I have the true value for the dependent variable, retention (1 = retained; 0 = not…
Matt Reichenbach • 3,404
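
For the question above, the c-statistic can be computed by hand by comparing every (retained, not-retained) pair: score 1 for a concordant pair, 1/2 for a tie, then divide by the number of pairs. A small Python sketch with made-up validation data:

```python
# Hand computation of the c-statistic on a toy validation set.
y_true = [1, 1, 1, 0, 0, 0, 0]                 # retention: 1 = retained, 0 = not
y_prob = [0.9, 0.7, 0.4, 0.6, 0.3, 0.3, 0.1]   # model's predicted probabilities

pairs = concordant = tied = 0
for y_p, s_p in zip(y_true, y_prob):
    if y_p != 1:
        continue
    for y_n, s_n in zip(y_true, y_prob):
        if y_n != 0:
            continue
        pairs += 1                 # one (retained, not retained) pair
        if s_p > s_n:
            concordant += 1        # model ranks the retained case higher
        elif s_p == s_n:
            tied += 1

print((concordant + 0.5 * tied) / pairs)   # 11 / 12 ~= 0.917
```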
43 votes • 2 answers

Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP)

Is Average Precision (AP) the area under the precision-recall curve (AUC of the PR curve)? EDIT: here is a comment about the difference between PR AUC and AP. The AUC is obtained by trapezoidal interpolation of the precision. An alternative and usually…
mrgloom • 1,687
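
To make the excerpt's distinction concrete, the two quantities can be computed side by side; in scikit-learn terms (used here only as an illustration, with synthetic data), AP is average_precision_score, while the trapezoidal area comes from applying auc to precision_recall_curve:

```python
import numpy as np
from sklearn.metrics import auc, average_precision_score, precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.25, size=500), 0, 1)

# Average Precision: a step-wise (rectangular) sum over recall increments.
ap = average_precision_score(y_true, y_score)

# Trapezoidal area under the PR curve: linearly interpolates precision,
# which tends to be optimistic for PR curves.
precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)

print(f"AP = {ap:.4f}, trapezoidal PR AUC = {pr_auc:.4f}")  # close, not equal
```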
35 votes • 3 answers

Why is AUC higher for a classifier that is less accurate than for one that is more accurate?

I have two classifiers: A, a naive Bayesian network, and B, a tree (singly-connected) Bayesian network. In terms of accuracy and other measures, A performs comparatively worse than B. However, when I use the R packages ROCR and AUC to perform ROC analysis,…
Jane Wayne • 1,268
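
Accuracy is evaluated at one fixed threshold, while AUC averages ranking quality over all thresholds, so the two can order models differently. A contrived Python illustration (not the poster's Bayesian networks):

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Model A ranks perfectly, but all its probabilities sit just above 0.5,
# so a 0.5 cutoff labels every sample positive.
score_a = np.array([0.60, 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67])

# Model B is better calibrated around 0.5 but makes two ranking mistakes.
score_b = np.array([0.10, 0.20, 0.30, 0.90, 0.60, 0.70, 0.80, 0.40])

for name, s in [("A", score_a), ("B", score_b)]:
    acc = accuracy_score(y, (s >= 0.5).astype(int))
    print(f"model {name}: accuracy = {acc:.2f}, AUC = {roc_auc_score(y, s):.2f}")
# model A: accuracy = 0.50, AUC = 1.00
# model B: accuracy = 0.75, AUC = 0.75
```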
30 votes • 3 answers

What is the difference in what AIC and c-statistic (AUC) actually measure for model fit?

Akaike Information Criterion (AIC) and the c-statistic (area under ROC curve) are two measures of model fit for logistic regression. I am having trouble explaining what is going on when the results of the two measures are not consistent. I guess…
timbp • 1,067
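
One way to see what each measure responds to is to fit a logistic regression and compute both on the same data; a sketch using statsmodels and synthetic data (illustrative only):

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 2))
p = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
y = rng.binomial(1, p)

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)

# AIC penalizes the in-sample likelihood for the number of parameters;
# the c-statistic measures pure rank discrimination of the fitted probabilities.
print(f"AIC = {fit.aic:.1f}")
print(f"c-statistic (AUC) = {roc_auc_score(y, fit.predict(X)):.3f}")
```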
30 votes • 3 answers

Can AUC-ROC be between 0-0.5?

Can AUC-ROC values be between 0 and 0.5? Does the model ever output values between 0 and 0.5?
Aman • 533
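
Yes: nothing in the definition keeps the AUC above 0.5; a value below 0.5 just means the ranking is worse than chance, and reversing the scores gives 1 − AUC. A quick Python check on synthetic data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
good = y + rng.normal(0, 1, size=1000)   # informative scores

print(roc_auc_score(y, good))    # well above 0.5
print(roc_auc_score(y, -good))   # same ranking reversed: 1 - AUC, below 0.5
```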
29 votes • 3 answers

ROC curve for discrete classifiers like SVM: Why do we still call it a "curve"? Isn't it just a "point"?

In the discussion "how to generate a ROC curve for binary classification", I think the confusion was that a "binary classifier" (which is any classifier that separates two classes) was for Yang what is called a "discrete classifier" (which…
Abdelhak Mahmoudi • 291
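
The point/curve distinction can be shown directly: hard labels pin down a single (FPR, TPR) point, while a continuous score, such as an SVM's decision_function, can be thresholded into a full curve. A scikit-learn sketch on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_curve
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, random_state=0)
clf = SVC(kernel="linear").fit(X, y)

# Hard labels give one point in ROC space ...
labels = clf.predict(X)
tp = np.sum((labels == 1) & (y == 1))
fn = np.sum((labels == 0) & (y == 1))
fp = np.sum((labels == 1) & (y == 0))
tn = np.sum((labels == 0) & (y == 0))
print("single point:", fp / (fp + tn), tp / (tp + fn))

# ... while sweeping a threshold over the continuous decision score
# traces out the whole curve.
fpr, tpr, thresholds = roc_curve(y, clf.decision_function(X))
print(f"{len(thresholds)} thresholds -> a full ROC curve")
```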
26 votes • 3 answers

Is my model any good, based on the diagnostic metric ($R^2$ / AUC / accuracy / RMSE etc.) value?

I've fitted my model and am trying to understand whether it's any good. I've calculated the recommended metrics to assess it ($R^2$ / AUC / accuracy / prediction error / etc.) but do not know how to interpret them. In short, how do I tell if my model…
mkt • 11,770
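
One standard sanity check behind this question is to compare each metric against a trivial baseline fitted to the same data; a hedged scikit-learn sketch (synthetic, imbalanced data for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Is 90% accuracy good?" Not if always guessing the majority class scores 90%.
dummy = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("baseline accuracy:", dummy.score(X_te, y_te))
print("model accuracy:   ", model.score(X_te, y_te))
print("model AUC:        ", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```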
24 votes • 2 answers

Did I just invent a Bayesian method for analysis of ROC curves?

Preamble This is a long post. If you're re-reading this, please note that I've revised the question portion, though the background material remains the same. Additionally, I believe that I've devised a solution to the problem. That solution appears…
Sycorax • 76,417
24 votes • 4 answers

How to derive the probabilistic interpretation of the AUC?

Why is the area under the ROC curve the probability that a classifier will rank a randomly chosen "positive" instance (from the retrieved predictions) higher than a randomly chosen "negative" one (from the original negative class)? How does one…
mff • 241
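
The equivalence asked about above can at least be verified numerically: the area returned by a standard implementation matches the empirical probability that a random positive outscores a random negative. A small Monte Carlo sketch (synthetic data, illustrative names):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y = rng.integers(0, 2, size=2000)
score = 0.8 * y + rng.normal(size=2000)

pos, neg = score[y == 1], score[y == 0]

# Monte Carlo estimate of P(score of random positive > score of random negative).
draws = 200_000
p_hat = np.mean(pos[rng.integers(0, len(pos), draws)]
                > neg[rng.integers(0, len(neg), draws)])

print(f"AUC        = {roc_auc_score(y, score):.4f}")
print(f"P(S+ > S-) = {p_hat:.4f}")   # agrees up to simulation noise
```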
22 votes • 4 answers

What is the name of this chart showing false and true positive rates and how is it generated?

The image below shows a continuous curve of false positive rates vs. true positive rates. However, what I don't immediately get is how these rates are being calculated. If a method is applied to a dataset, it has a certain FP rate and a certain FN…
Axoren • 323
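
The chart is a ROC curve: each point comes from picking a score threshold, calling everything above it positive, and recording the resulting (FPR, TPR) pair. In scikit-learn the sweep is done by roc_curve (synthetic scores below, for illustration):

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)
score = y + rng.normal(0, 1.2, size=300)   # noisy continuous scores

# Every distinct score is a candidate threshold; each yields one
# (false positive rate, true positive rate) point on the curve.
fpr, tpr, thresholds = roc_curve(y, score)
for f, t, th in list(zip(fpr, tpr, thresholds))[:5]:
    print(f"threshold {th:6.2f} -> FPR {f:.2f}, TPR {t:.2f}")
```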
22 votes • 1 answer

Why is ROC AUC equivalent to the probability that two randomly-selected samples are correctly ranked?

I found there are two ways to understand what AUC stands for, but I couldn't see why the two interpretations are mathematically equivalent. In the first interpretation, AUC is the area under the ROC curve. Picking points from 0 to 1 as threshold…
Felicia.H • 323
21 votes • 3 answers

Why is AUC = 1 even when the classifier has misclassified half of the samples?

I am using a classifier that returns probabilities. To calculate AUC, I am using the pROC R package. The output probabilities from the classifier…
user4704857 • 502
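
The resolution is that AUC depends only on how the scores rank the classes, not on where they fall relative to 0.5: a perfect ranking gives AUC = 1 even when a 0.5 cutoff misclassifies half the samples. A minimal Python demonstration:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# All probabilities sit below 0.5, so a 0.5 cutoff calls everything negative
# and misclassifies all four positives -- yet the ranking is perfect.
probs = np.array([0.01, 0.02, 0.03, 0.04, 0.10, 0.20, 0.30, 0.40])

print("accuracy at 0.5:", accuracy_score(y, (probs >= 0.5).astype(int)))  # 0.5
print("AUC:", roc_auc_score(y, probs))                                    # 1.0
```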
20 votes • 3 answers

AUC and class imbalance in training/test dataset

I have just started learning about the area under the ROC curve (AUC). I am told that AUC is not affected by data imbalance. I think this means that AUC is insensitive to imbalance in the test data, rather than imbalance in the training data. In other words, only…
Munichong • 1,645
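
The test-set half of the claim is easy to probe: because AUC conditions on drawing one positive and one negative, discarding most of one class changes it only by sampling noise. A quick simulation sketch:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 20_000
y = rng.integers(0, 2, size=n)
score = y + rng.normal(0, 1, size=n)

print("balanced test set:  ", round(roc_auc_score(y, score), 3))

# Keep all positives but only 5% of negatives: heavy test-set imbalance.
keep = (y == 1) | (rng.random(n) < 0.05)
print("imbalanced test set:", round(roc_auc_score(y[keep], score[keep]), 3))
# The two values agree up to sampling noise.
```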
20 votes • 3 answers

Is the Dice coefficient the same as accuracy?

I have come across the Dice coefficient for volume similarity and accuracy. It seems to me that these two measures are the same. Is that correct?
RockTheStar • 11,277
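
They are not the same: accuracy credits true negatives, while the Dice coefficient $2|A \cap B| / (|A| + |B|)$ ignores them (for binary masks it equals the F1 score). A quick numeric check in Python:

```python
import numpy as np

truth = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 0, 1, 0, 0, 0, 0], dtype=bool)

accuracy = np.mean(truth == pred)                             # counts true negatives
dice = 2 * np.sum(truth & pred) / (truth.sum() + pred.sum())  # ignores them

print(f"accuracy = {accuracy:.3f}, Dice = {dice:.3f}")   # 0.750 vs 0.667
```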