
I am wondering what the difference is between these error metrics (and what they mean) in object detection: mAP_0.5:0.95, mAP_0.5, recall, and precision.

Peter

1 Answer


Precision is the fraction of predictions that are correct (TP / (TP + FP)), and recall is the fraction of ground-truth objects that are detected (TP / (TP + FN)). The area under the precision-recall curve gives the average precision (AP) for a class; the mean of AP over all classes is called mean average precision (mAP). To calculate AP, we need Intersection over Union (IoU), on the basis of which we decide which predictions are correct, e.g. a prediction with IoU > 0.5 can be considered a True Positive.
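As a minimal sketch of that TP decision, here is an IoU function for axis-aligned boxes in (x1, y1, x2, y2) format (the box format and threshold value are assumptions for illustration):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes overlapping by half: intersection 50, union 150
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))
print(overlap)                  # ≈ 0.3333
print(overlap > 0.5)           # False -> not a True Positive at IoU 0.5
```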

The problem with a single IoU threshold is that two predictions with IoU 0.6 and 0.9 get equal weight. That's why evaluation metrics such as COCO's use multiple IoU thresholds, e.g. IoU ranging from 0.5 to 0.95 with a step size of 0.05, written AP@[.5:.05:.95].

Then we take the average of the AP values obtained at these IoU thresholds to get the final mAP:

$mAP = \frac{mAP_{0.50} + mAP_{0.55} + ... + mAP_{0.95}}{10}$
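The averaging step above can be sketched as follows; the per-threshold AP values here are made up purely for illustration (in practice each would come from a full precision-recall evaluation at that threshold):

```python
# Hypothetical per-threshold AP values for one model (illustrative only)
ap_per_threshold = {
    0.50: 0.72, 0.55: 0.69, 0.60: 0.65, 0.65: 0.61, 0.70: 0.55,
    0.75: 0.48, 0.80: 0.40, 0.85: 0.30, 0.90: 0.18, 0.95: 0.06,
}

# COCO-style thresholds: 0.50, 0.55, ..., 0.95 (ten values)
thresholds = [round(0.50 + 0.05 * i, 2) for i in range(10)]

# mAP_0.5:0.95 is the plain mean of AP over these thresholds
map_50_95 = sum(ap_per_threshold[t] for t in thresholds) / len(thresholds)
print(round(map_50_95, 3))  # mAP_0.5 alone would just be ap_per_threshold[0.50]
```

Note how mAP_0.5:0.95 (the mean over all ten thresholds) is lower than mAP_0.5 alone, since the stricter thresholds pull the average down.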

You may want to read about mAP in more detail, e.g. here. I also wrote a blog post on it some time ago.

kHarshit