
I have a multi-class classification problem that requires me to predict the most likely winner from a batch. My idea was to use a binary classifier to learn the general "winning characteristics", then, for each member of the batch, compute the probability that it is a winner, choose the most likely member, and output that as my label. However, I want to recalibrate my probabilities so that they sum to one across the batch. How do I do this?
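
To make the setup concrete, here is a minimal sketch of the approach described above: fit a binary classifier on (features, is-winner) pairs, score each member of a batch, and renormalize the per-member probabilities so they sum to one within the batch. The names `X_train`, `y_train`, `batch_features`, and the choice of `LogisticRegression` are placeholders, not anything specified in the question.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: substitute your own (features, is_winner) pairs.
X_train = np.random.rand(200, 5)
y_train = np.random.randint(0, 2, size=200)   # 1 = winner, 0 = not
batch_features = np.random.rand(8, 5)         # one row per member of the batch

# Binary classifier that learns the general "winning characteristics".
clf = LogisticRegression().fit(X_train, y_train)

# P(winner) for each batch member, scored independently.
p_win = clf.predict_proba(batch_features)[:, 1]

# Renormalize within the batch so the probabilities sum to one.
p_batch = p_win / p_win.sum()

predicted_winner = int(np.argmax(p_batch))
print(p_batch, predicted_winner)
```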

mn569
  • Search for Platt scaling. See https://stats.stackexchange.com/questions/5196/why-use-platts-scaling – msuzen Aug 08 '21 at 18:33
  • Probabilities always have total measure 1, so it’s not clear how your model is getting a different result. Can you [edit] your post to explain in more detail what your data is, what your model is, and what problem you’re trying to solve? – Sycorax Aug 08 '21 at 19:13
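
Following up on the Platt scaling pointer in the first comment: scikit-learn exposes Platt scaling through CalibratedClassifierCV with method="sigmoid", which fits a logistic curve to the classifier's scores via cross-validation. This is a sketch only, with placeholder data and an arbitrary LinearSVC base model; it is not an accepted answer to the question.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

# Placeholder data: substitute your own (features, is_winner) pairs.
X_train = np.random.rand(200, 5)
y_train = np.random.randint(0, 2, size=200)
batch_features = np.random.rand(8, 5)

# Platt scaling: fit a sigmoid on the base classifier's scores via 3-fold CV.
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=3)
calibrated.fit(X_train, y_train)

# Calibrated P(winner) per member, then renormalize within the batch.
p_win = calibrated.predict_proba(batch_features)[:, 1]
p_batch = p_win / p_win.sum()
print(p_batch, int(np.argmax(p_batch)))
```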

0 Answers