I want to compare a given classification algorithm with others via the area under the ROC curve (AUC) metric. Unfortunately, this algorithm only outputs the values of its confusion matrix (TP, FP, TN, FN) and a subset of the predicted positives, but no probability score for any of its predictions.
Confusion Matrix and Statistics

          Reference
Prediction TRUE FALSE
     TRUE    11    10
     FALSE    3   475

               Accuracy : 0.9739
                 95% CI : (0.9559, 0.9861)
    No Information Rate : 0.9719
    P-Value [Acc > NIR] : 0.46294

                  Kappa : 0.6156
 Mcnemar's Test P-Value : 0.09609

            Sensitivity : 0.78571
            Specificity : 0.97938
         Pos Pred Value : 0.52381
         Neg Pred Value : 0.99372
             Prevalence : 0.02806
         Detection Rate : 0.02204
   Detection Prevalence : 0.04208
      Balanced Accuracy : 0.88255
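To make clear what I actually have to work with, here is a minimal R sketch (counts taken from the matrix above, with TRUE as the positive class) of what the four cells give me on their own: a single (FPR, TPR) operating point rather than a full curve.

    # Counts from the confusion matrix above (TRUE = positive class)
    TP <- 11; FP <- 10; FN <- 3; TN <- 475

    sensitivity <- TP / (TP + FN)    # TPR, matches 0.78571 above
    specificity <- TN / (TN + FP)    # TNR, matches 0.97938 above
    fpr         <- 1 - specificity   # FPR of this one operating point

    # The matrix pins down only this single point in ROC space:
    c(FPR = fpr, TPR = sensitivity)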
When I tried to understand the ROC curve with examples like this or this, they always require a prediction score per observation to calculate the AUC and draw the curve. Wikipedia hints that I should use a probability density function, but I don't know which one or how to apply it here. So, is it even possible to calculate the AUC from this output, and if yes, how?
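For comparison, this is the kind of input those examples expect (hypothetical labels and scores here, using the pROC package), which is exactly what my algorithm does not provide:

    library(pROC)

    # Hypothetical per-prediction data: ROC/AUC tools need a score for each case
    labels <- c(1, 0, 1, 0, 0, 1)
    scores <- c(0.9, 0.3, 0.6, 0.4, 0.1, 0.8)

    roc_obj <- roc(labels, scores)
    auc(roc_obj)     # area under the curve
    plot(roc_obj)    # the ROC curve itself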
Thank you guys in advance for your replies.