One of the origins of ROC curves seems to be the comparison of radar systems in WWII (source). How did they actually compute the False Positive Rate when they had no way to count True Negatives?
If I understand correctly, the FPR is $\mathrm{FPR} = \frac{FP}{FP + TN}$. But what would $TN$ be in this case? Every interval in which the radar correctly identified that there was no bomber overhead? That would require arbitrarily quantizing the time span: e.g., "the system correctly predicts that today there was no bomber" or "the system correctly predicts that this hour there was no bomber".
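
To make the quantization problem concrete, here is a minimal sketch (all counts are made up, and treating each interval as one "case" with at most one event is a crude assumption): the same month of radar operation yields very different FPRs depending on whether hours or days are counted as cases.

```python
# A minimal sketch with hypothetical counts showing how the arbitrary
# choice of time quantization changes TN, and therefore the FPR, even
# though the raw alarm counts stay the same.

tp = 5   # hypothetical: bombers correctly detected during one 30-day month
fn = 2   # hypothetical: bombers missed
fp = 3   # hypothetical: false alarms

for label, n_intervals in [("hourly", 30 * 24), ("daily", 30)]:
    # Crude accounting: every interval is one "case", and every interval
    # with neither a bomber nor an alarm counts as a true negative.
    tn = n_intervals - (tp + fn + fp)
    fpr = fp / (fp + tn)
    print(f"{label} quantization: TN = {tn}, FPR = {fpr:.3f}")
```

The false-alarm count never changes here; only the denominator does, so the FPR swings from about 0.004 (hourly) to about 0.130 (daily) purely from the choice of interval.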
In comparison, I think precision would have been much more intuitive for them than the FPR, since $\text{precision} = \frac{TP}{TP + FP}$ is built only from alarms and needs no $TN$ at all. So why did they go with ROC curves instead of PR curves?
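
With the hypothetical counts from the sketch above, $\text{precision} = \frac{5}{5 + 3} = 0.625$ whether the intervals are hours or days, while the FPR varied by more than an order of magnitude.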