It is widely agreed (see for example this discussion) that logistic regression guarantees the model will produce predictions that are well calibrated in the large, i.e., the mean predicted probability matches the observed event rate.
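To make concrete what I mean by calibration-in-the-large, here is a minimal numerical sketch (my own illustration, on simulated data): with an intercept and no regularization, the MLE score equations force the residuals to sum to zero, so the average prediction equals the event rate. It assumes scikit-learn ≥ 1.2 for `penalty=None`.

```python
# Sketch: calibration-in-the-large for plain-MLE logistic regression.
# With an intercept and no penalty, sum(y_i - p_i) = 0 at the MLE,
# so mean(p_i) equals mean(y_i) up to solver tolerance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
logits = 0.5 + X @ np.array([1.0, -2.0, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = LogisticRegression(penalty=None).fit(X, y)  # unpenalized MLE, intercept included
p = model.predict_proba(X)[:, 1]

print(f"mean predicted probability: {p.mean():.4f}")
print(f"observed event rate:        {y.mean():.4f}")
```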
Does logistic regression also guarantee good calibration-in-the-small, or not necessarily?
If calibration-in-the-small is not guaranteed to be good (or in cases where it is not), what is known about the reasons?
Are there known methods to improve calibration-in-the-small via a modified loss function (or some other means) while training the LR model, or is this only solved by applying a calibration function/transformation to the model's output predictions? For reference, a sketch of the post-hoc route I have in mind follows below.
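This is a hedged sketch of the post-hoc option I am referring to, not an answer to the question: inspect calibration-in-the-small with a binned reliability curve, then fit a monotone calibration map (isotonic) on top of the LR scores. The simulated dataset, the train/test split, and the choice of isotonic regression are my own illustrative assumptions.

```python
# Sketch: checking calibration-in-the-small and applying a post-hoc
# calibration transformation on top of logistic regression outputs.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(20000, 5))
# A nonlinear true relationship can leave a linear LR miscalibrated in the small.
logits = X[:, 0] ** 2 - 1 + 0.5 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

lr = LogisticRegression(penalty=None).fit(X_tr, y_tr)
frac_pos, mean_pred = calibration_curve(y_te, lr.predict_proba(X_te)[:, 1], n_bins=10)
print("raw LR reliability (mean predicted vs. observed per bin):")
print(np.column_stack([mean_pred, frac_pos]).round(3))

# Post-hoc isotonic calibration: learns a monotone map applied to LR's scores.
cal = CalibratedClassifierCV(LogisticRegression(penalty=None), method="isotonic", cv=5)
cal.fit(X_tr, y_tr)
frac_pos_c, mean_pred_c = calibration_curve(y_te, cal.predict_proba(X_te)[:, 1], n_bins=10)
print("after isotonic calibration:")
print(np.column_stack([mean_pred_c, frac_pos_c]).round(3))
```

What I am asking is whether this kind of output-side transformation is the only practical remedy, or whether the training objective itself can be modified to get the same effect.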