I would like to train a binary logistic regression model, but instead of the dependent variable being a hard class label (i.e., 1 or 0), I would like it to be a probability distribution over the possible classes. For example, y could equal [1, 0], [0, 1], [0.5, 0.5], and so on. SKLearn does not seem to provide this functionality through its `LogisticRegression` class. Are there any other Python libraries that would allow me to do this? (I am trying to avoid implementing it from scratch.)
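One workaround I have considered (a sketch, not necessarily the cleanest approach): training on soft labels with cross-entropy loss is equivalent to duplicating each example once per class and weighting each copy by its target probability. Since `LogisticRegression.fit` accepts a `sample_weight` argument, this stays within SKLearn. The data below is made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 4 examples, 3 features; soft targets given as P(class 1).
X = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 2.0, 0.0],
              [1.5, 0.5, 1.0]])
p = np.array([1.0, 0.0, 0.5, 0.8])  # hypothetical soft labels

# Duplicate every row once with label 1 and once with label 0,
# weighting each copy by its target probability. Fitting with these
# sample weights minimizes the same cross-entropy as fitting
# directly on the soft labels would.
X_dup = np.vstack([X, X])
y_dup = np.concatenate([np.ones(len(X)), np.zeros(len(X))])
w_dup = np.concatenate([p, 1.0 - p])

clf = LogisticRegression()
clf.fit(X_dup, y_dup, sample_weight=w_dup)
print(clf.predict_proba(X)[:, 1])
```

With strong enough regularization turned down and enough data, the fitted probabilities should approach the soft targets; for the tiny example above they will only be roughly in the right direction.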
It seems like [it does exist](https://scikit-learn.org/stable/modules/calibration.html#calibrating-a-classifier) elsewhere in SKLearn: tools are provided to map the `LogisticRegression` output probability to a 'calibrated' probability. Still, (why) do you really want to do this? – Arya McCarthy Aug 16 '21 at 22:07