I have implemented a binary classifier, and I can compute its precision and recall on a test set (>100 examples). Is it meaningful to talk about a confidence interval for the precision (or recall)?

Precision (or recall) can be treated as a random variable: my sample is the test set, and the classifier induces a precision value on it. How should I define the distribution of the precision for class 1 (# of correct classifications of c1 / # of total classifications of c1)? One alternative is to model it as a proportion using the Binomial distribution.
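To illustrate the binomial-proportion idea mentioned above: treating precision as k correct positives out of n predicted positives, a normal-approximation (Wald) interval is a simple first cut. This is a sketch, not part of the original question; the function name and the 80/100 figures are hypothetical.

```python
import math

def binomial_ci(successes, total, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion.

    For precision: `successes` = correctly classified instances of class 1,
    `total` = all instances the classifier assigned to class 1.
    z=1.96 gives an approximate 95% interval.
    """
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)          # standard error of the proportion
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical example: 80 correct out of 100 predictions of class 1
lo, hi = binomial_ci(80, 100)
print(f"precision = 0.80, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Note that the Wald interval behaves poorly when the proportion is near 0 or 1 or the count of predicted positives is small; the Wilson score interval is a common, better-behaved alternative in those regimes.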
- I would opt for the bootstrap to compute such confidence intervals. – Marc Claesen Oct 05 '14 at 10:22
- Yes, certainly you can talk about the confidence interval for precision or recall. – user31264 Oct 05 '14 at 10:47
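The bootstrap suggested in the first comment avoids distributional assumptions: resample (true label, predicted label) pairs from the test set with replacement, recompute precision on each resample, and take percentiles. A minimal sketch (function names and data are hypothetical, not from the original thread):

```python
import random

def precision(pairs, positive=1):
    """Precision for `positive` class over (true, predicted) pairs.
    Returns None if the resample contains no positive predictions."""
    pred_pos = [(t, p) for t, p in pairs if p == positive]
    if not pred_pos:
        return None
    return sum(t == positive for t, _ in pred_pos) / len(pred_pos)

def bootstrap_precision_ci(y_true, y_pred, positive=1,
                           n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for precision.

    Resamples the test set (pairs of true/predicted labels) with
    replacement; resamples with no positive predictions are redrawn.
    """
    rng = random.Random(seed)
    pairs = list(zip(y_true, y_pred))
    stats = []
    while len(stats) < n_boot:
        sample = [pairs[rng.randrange(len(pairs))] for _ in pairs]
        prec = precision(sample, positive)
        if prec is not None:
            stats.append(prec)
    stats.sort()
    return (stats[int(len(stats) * alpha / 2)],
            stats[int(len(stats) * (1 - alpha / 2)) - 1])

# Hypothetical test set: classifier predicts class 1 for 50 examples,
# of which 40 are truly class 1 (precision = 0.8)
y_pred = [1] * 50 + [0] * 50
y_true = [1] * 40 + [0] * 10 + [0] * 50
lo, hi = bootstrap_precision_ci(y_true, y_pred)
print(f"bootstrap 95% CI for precision: ({lo:.3f}, {hi:.3f})")
```

The same resampling loop gives an interval for recall (or F1) by swapping in the appropriate statistic; the percentile interval is the simplest variant, and bias-corrected (BCa) intervals are a common refinement.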