While evaluating a classifier I implemented for a university project, I am observing an AUROC (area under the ROC curve) of 1.0, which means the classifier achieves a TP rate of 1.0 at a FP rate of 0.0.
The dataset used for training was captured independently of the dataset used for evaluation. Nevertheless, I am hesitant to report this AUROC value.
How should I interpret an AUROC value of 1.0 with respect to the general performance of the classifier? Is it overfitting, even though a separate dataset (which matches the real-world scenario) is used for testing? Does regularization make sense here?
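For context on what 1.0 means here: AUROC equals the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, so a value of 1.0 means my classifier ranks every positive above every negative. A minimal sketch of how I compute it (stdlib only, via the Mann-Whitney U statistic; the function name and data are illustrative, not my actual pipeline):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Equals the probability that a randomly chosen positive is scored
    higher than a randomly chosen negative (ties count as 0.5).
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairwise "wins" of positives over negatives.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation: every positive scored above every negative.
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # 1.0
```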