I know that log-loss penalises models that make confident wrong predictions. Can it be translated into a percentage accuracy? If not, how should I report the error, or compare it against other percentage-based error metrics?
For example, when training a neural network whose output layer has 128 sigmoid units, the loss drops from 0.30 to 0.04 over 20 epochs. How do I evaluate classifier accuracy from this? It is a multi-label classification problem.