Questions tagged [average-precision]
For questions related to the Average Precision metric.
51 questions
43
votes
2 answers
Area under Precision-Recall Curve (AUC of PR-curve) and Average Precision (AP)
Is Average Precision (AP) the Area under the Precision-Recall Curve (AUC of the PR-curve)?
EDIT:
Here is a comment about the difference between PR AUC and AP.
The AUC is obtained by trapezoidal interpolation of the precision. An alternative and usually…

mrgloom
- 1,687
- 4
- 25
- 33
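A minimal sketch of the two quantities, assuming scikit-learn and a made-up toy dataset (the `y_true`/`y_scores` values below are illustrative, not from the question): `average_precision_score` uses the step-wise sum over recall increments, while `auc` applied to the same precision-recall points uses trapezoidal interpolation, so the two numbers usually differ slightly.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score, auc

# Made-up labels and scores, purely for illustration
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1])

precision, recall, _ = precision_recall_curve(y_true, y_scores)

# AP: step-wise (rectangle) sum over recall increments, sum_n (R_n - R_{n-1}) * P_n
ap = average_precision_score(y_true, y_scores)

# PR-AUC: trapezoidal interpolation between the same operating points
pr_auc = auc(recall, precision)

print(f"Average Precision (step-wise): {ap:.4f}")
print(f"PR-AUC (trapezoidal):          {pr_auc:.4f}")
```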
31
votes
1 answer
Mean Average Precision vs Mean Reciprocal Rank
I am trying to understand when it is appropriate to use MAP and when MRR should be used. I found this presentation, which states that MRR is best utilised when the number of relevant results is less than 5, and best of all when it is 1. In other cases MAP…

K G
- 411
- 1
- 4
- 4
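A hand-rolled sketch of both metrics on made-up relevance lists (one list of 0/1 relevance judgments per query; the data and helper names are illustrative only). MRR only looks at the rank of the first relevant result, while MAP averages Precision@k over every relevant rank:

```python
import numpy as np

def reciprocal_rank(relevance):
    """Reciprocal of the rank of the first relevant result (0 if none)."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def average_precision(relevance):
    """Mean of Precision@k over the ranks k where a relevant item appears.
    Assumes all relevant documents appear somewhere in the ranking."""
    hits, precisions = 0, []
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

# Toy relevance lists for three queries (1 = relevant, 0 = not), made up for illustration
queries = [[0, 1, 0, 1], [1, 0, 0, 0], [0, 0, 1, 1]]

mrr = np.mean([reciprocal_rank(q) for q in queries])
map_score = np.mean([average_precision(q) for q in queries])
print(f"MRR: {mrr:.3f}, MAP: {map_score:.3f}")
```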
13
votes
2 answers
How to compare two ranking algorithms?
I want to compare two ranking algorithms. In these algorithms, the client specifies some conditions in his/her search. According to the client's requirements, these algorithms should assign a score to each item in the database and retrieve items with…

M K
- 131
- 1
- 3
12
votes
4 answers
Average Precision in Object Detection
I'm quite confused as to how I can calculate the AP or mAP values as there seem to be quite a few different methods. I specifically want to get the AP/mAP values for object detection.
All I know for sure is:
Recall = TP/(TP + FN),
Precision = TP/(TP…

User1915
- 391
- 1
- 3
- 12
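One common convention is the all-point interpolated, PASCAL-VOC-style AP, sketched roughly below; the TP/FP flags are hypothetical and assume detections have already been matched to ground-truth boxes at some IoU threshold:

```python
import numpy as np

def detection_ap(tp_flags, num_ground_truth):
    """All-point interpolated AP (PASCAL-VOC style) from detections sorted by confidence."""
    tp_flags = np.asarray(tp_flags, dtype=float)
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1.0 - tp_flags)
    recall = tp / num_ground_truth      # TP / (TP + FN)
    precision = tp / (tp + fp)          # TP / (TP + FP)

    # Add sentinel points, then take the running maximum of precision from the
    # right to obtain the "interpolated" precision envelope.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]

    # Integrate: envelope precision times each recall increment
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Hypothetical detections, sorted by decreasing confidence:
# 1 = matched a ground-truth box (TP), 0 = unmatched (FP)
print(detection_ap([1, 1, 0, 1, 0, 0, 1], num_ground_truth=5))
```

mAP is then just this AP averaged over classes (and, for COCO-style evaluation, over IoU thresholds as well).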
9
votes
0 answers
Baseline for Precision-Related Metrics
When working with ROC-AUC as a metric for binary classification, one often considers a value of 0.5 as a baseline from a random classifier (i.e. a data-blind classifier that randomly classifies test instances with equal probability).
I have read…

Amelio Vazquez-Reina
- 17,546
- 26
- 74
- 110
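A commonly cited baseline is that a score-blind (random) ranker has expected AP roughly equal to the positive-class prevalence, not 0.5 as with ROC-AUC; a small simulation sketch with made-up imbalanced labels (assuming scikit-learn) illustrates this:

```python
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

# Imbalanced toy labels: roughly 10% positives (made up for illustration)
y_true = (rng.random(10_000) < 0.10).astype(int)

# "Data-blind" classifier: uniformly random scores
random_scores = rng.random(y_true.size)

print("Prevalence:         ", y_true.mean())
print("AP of random scores:", average_precision_score(y_true, random_scores))
# The random-baseline AP lands near the prevalence (~0.10), not near 0.5.
```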
7
votes
0 answers
Averaged vs. Combined k-fold cross validation and leave-one-out
Calculating recall/precision from k-fold cross-validation (or leave-one-out) can be performed either by averaging the recall/precision values obtained from the different k folds or by combining the predictions and then calculating one value for each of…

Abbas
- 485
- 1
- 4
- 12
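A rough sketch of the two options with scikit-learn (the synthetic dataset and model are placeholders, not from the question): average the per-fold precision values, or pool all out-of-fold predictions and compute a single value.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import precision_score

X, y = make_classification(n_samples=500, weights=[0.8, 0.2], random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

fold_precisions, pooled_true, pooled_pred = [], [], []
for train_idx, test_idx in cv.split(X, y):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    fold_precisions.append(precision_score(y[test_idx], pred))  # one value per fold
    pooled_true.extend(y[test_idx])
    pooled_pred.extend(pred)

print("Averaged over folds:", np.mean(fold_precisions))
print("Pooled predictions: ", precision_score(pooled_true, pooled_pred))
```

The two numbers coincide only when every fold has the same size and class balance; otherwise the pooled value weights folds by their contribution to the confusion counts.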
7
votes
2 answers
What does it mean if the ROC AUC is high and the Average Precision is low?
I have a model that produces a high ROC AUC (0.90), but at the same time a low average precision (0.30). From what I've found, I think it might have something to do with imbalanced data (which the dataset is). However, I cannot see how this…

Icyeval
- 383
- 3
- 7
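A simulation sketch of how this can happen under heavy class imbalance (the score distributions below are made up): positives usually outrank negatives, so ROC AUC is high, but at useful thresholds even a small share of high-scoring negatives swamps the few positives, so precision, and hence AP, stays low.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

# Roughly 1% positives; positive scores sit above most, but not all, negative scores
n_neg, n_pos = 99_000, 1_000
neg_scores = rng.normal(0.0, 1.0, n_neg)
pos_scores = rng.normal(2.0, 1.0, n_pos)

y_true = np.concatenate([np.zeros(n_neg), np.ones(n_pos)])
y_score = np.concatenate([neg_scores, pos_scores])

# ROC AUC stays high because it only asks how often positives outrank negatives;
# AP drops because the negatives that do outrank positives dominate precision.
print("ROC AUC:", roc_auc_score(y_true, y_score))
print("AP:     ", average_precision_score(y_true, y_score))
```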
6
votes
2 answers
Is it better to compute Average Precision using the trapezoidal rule or the rectangle method?
Background
Average precision is a popular and important performance metric widely used for, e.g., retrieval and detection tasks. It measures the area under the precision-recall curve, which plots the precision values for all possible detection…

Callidior
- 63
- 4
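A small NumPy sketch of both rules on made-up precision-recall points (the values are illustrative only): the rectangle rule credits each recall increment with the precision at its right endpoint, as in the usual AP definition, while the trapezoidal rule linearly interpolates between endpoints and typically yields a slightly higher number when precision falls as recall rises.

```python
import numpy as np

# Toy precision-recall operating points (made up), sorted by increasing recall
recall    = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
precision = np.array([1.0, 0.9, 0.75, 0.6, 0.45, 0.3])

# Rectangle (step) rule: sum over (R_n - R_{n-1}) * P_n
ap_rect = np.sum(np.diff(recall) * precision[1:])

# Trapezoidal rule: average the precision at both ends of each recall interval
ap_trap = np.trapz(precision, recall)

print(f"Rectangle rule:   {ap_rect:.4f}")
print(f"Trapezoidal rule: {ap_trap:.4f}")
```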
4
votes
1 answer
"Mean average precision" (MAP) evaluation statistic - understanding good/bad/chance values
I'm evaluating a multilabel classifier. I'm familiar with the Area Under the Curve statistic, which has some nice properties (e.g. chance level is always 50%). But for some applications, it's more appropriate to use the "Mean average precision"…

Dan Stowell
- 1,262
- 1
- 12
- 22
4
votes
2 answers
Derive percentiles from binned data
The question below was asked on a sister site (Stack Overflow) back in 2010 by a user still active there (to me it seems more suitable here, for example quite similar to 21422):
I have a bunch of data in Excel that I need to get certain…

pnuts
- 157
- 1
- 7
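One common approach is to assume values are spread uniformly within each bin and interpolate linearly to the target cumulative count; a sketch with hypothetical bin edges and counts (the function name and data are illustrative only):

```python
import numpy as np

def percentile_from_bins(bin_edges, counts, q):
    """Estimate the q-th percentile (0-100) by linear interpolation inside the
    bin that contains the target cumulative count."""
    counts = np.asarray(counts, dtype=float)
    cum = np.cumsum(counts)
    target = q / 100.0 * cum[-1]
    i = int(np.searchsorted(cum, target))       # index of the bin holding the target
    prev_cum = cum[i - 1] if i > 0 else 0.0
    lo, hi = bin_edges[i], bin_edges[i + 1]
    frac = (target - prev_cum) / counts[i]      # fractional position within that bin
    return lo + frac * (hi - lo)

# Hypothetical binned data: bin edges and a count per bin
edges  = [0, 10, 20, 30, 40]
counts = [5, 15, 25, 5]

print(percentile_from_bins(edges, counts, 50))  # median estimate
```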
4
votes
1 answer
Intuitive or quantitative explanation of why we care about mean average precision (mAP) for CNN classifiers?
Consider CNN classifiers applied to some image classification tasks: to fix ideas, let's consider the ImageNet Challenge, where each image belongs to 1 of 1000 nonoverlapping classes, even though the question is more general.
Usually, when people…

DeltaIV
- 15,894
- 4
- 62
- 104
4
votes
1 answer
Average precision when not all the relevant documents are found
I can't find a proper source on the Internet that explains this.
I have built a search engine that, for a particular query, retrieves 5 relevant documents out of the 10 relevant documents.
When I calculate the average precision, I sum the Precision@k,…

ramborambo
- 205
- 1
- 9
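The usual convention is to divide by the total number of relevant documents (10 here), so the 5 relevant documents that were never retrieved implicitly contribute zero precision; a minimal sketch with hypothetical relevance flags:

```python
def average_precision(retrieved_relevance, total_relevant):
    """AP = sum of Precision@k at the ranks of retrieved relevant documents,
    divided by the TOTAL number of relevant documents (not just those found)."""
    hits, precisions = 0, []
    for k, rel in enumerate(retrieved_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / total_relevant

# Hypothetical ranking in which only 5 of the 10 relevant documents are retrieved
relevance = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
print(average_precision(relevance, total_relevant=10))
```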
3
votes
1 answer
Precision, Recall and area under ROC curve as sample size increases
The following is a question from an exam paper on evaluating the performance of search engines. To this day I have looked in my textbook and at close to 50 web pages, and I can't find one convincing argument for any of the cases. Can anyone help…

user692734
- 131
- 1
3
votes
0 answers
Calculating the standard deviation of the mean of average rates of speed
Is it possible to determine the mean value of a point by averaging the average rate of ranges that contain that point, and if so, how can the uncertainty of that value be accurately determined?
I think my question can be best asked with a…

Nick Anderegg
- 81
- 6
2
votes
0 answers
Average of percentages / prove causal relationship between sale dates & margin sales
I just wanted to make sure I'm right here. I have a situation where I need to prove that on days when we sell more than 10 items, our margin (sale price vs. suggested price) goes up. I'm not really sure it's possible to prove this, but this is what…

Lorenzo
- 121
- 1