I am working on a linear classifier whose expected output is 1 for inputs belonging to class A and 0 for inputs belonging to class B.
In some cases the output is
- nearly 0 (0.000198752053929624), or
- nearly 1 (0.999740100963010).
I've decided to round the outputs to the nearest integer, and the resulting accuracy is 100%.
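To make it concrete, here is a minimal sketch of what I mean by rounding, assuming a linear model with a sigmoid output (the data and parameter values below are made up purely for illustration):

```python
import numpy as np

# Toy stand-ins for my real features and trained parameters
# (illustrative values only).
X = np.array([[0.2, 1.5], [2.0, -0.3]])
weights = np.array([3.0, -4.0])
bias = 0.5

def predict(X, weights, bias):
    """Linear model with a sigmoid output, so raw scores lie in (0, 1)."""
    z = X @ weights + bias            # linear combination of the features
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid squashes z into (0, 1)

raw = predict(X, weights, bias)       # values like 0.0001987... or 0.9997...
labels = (raw >= 0.5).astype(int)     # "rounding" = thresholding at 0.5
print(raw, labels)
```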
My question is: is this an acceptable procedure, or is there some underlying problem that produces these outputs instead of exact 0s and 1s?
This happens for learning rates smaller than 0.1 when training with gradient descent.
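In case it matters, this is roughly the kind of training loop I am using (a self-contained sketch, assuming the logistic loss; `eta` is the learning rate, and the data and iteration count are again made up):

```python
import numpy as np

# Toy setup; eta is the learning rate (the issue appears for eta < 0.1).
X = np.array([[0.2, 1.5], [2.0, -0.3]])
y = np.array([1, 0])                  # class A = 1, class B = 0
weights = np.zeros(2)
bias = 0.0
eta = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    p = sigmoid(X @ weights + bias)   # outputs lie in (0, 1), never exactly 0 or 1
    grad_w = X.T @ (p - y) / len(y)   # gradient of the mean logistic loss
    grad_b = np.mean(p - y)
    weights -= eta * grad_w
    bias -= eta * grad_b
```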