Basically, I came across an article where the authors first ran a logistic regression on a data set to predict the probability (q = demand) of a customer buying their product, as a function of price p and various other customer characteristics. They then created a histogram of so-called "predicted demand buckets", i.e. the observed number of clients falling within each range of predicted demand.
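To make sure I understand the setup, here is a minimal sketch of what I think they did (the feature names, the number of buckets, and the use of sklearn/pandas are just my own assumptions, not from the article):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# df: one row per customer, with a binary outcome 'bought',
# the offered 'price', and other customer characteristics (hypothetical names)
X = df[["price", "age", "income"]]
y = df["bought"]

model = LogisticRegression().fit(X, y)
df["q_hat"] = model.predict_proba(X)[:, 1]   # predicted purchase probability ("demand")

# cut the predictions into, say, 10 equal-width buckets and count clients per bucket
df["bucket"] = pd.cut(df["q_hat"], bins=np.linspace(0, 1, 11))
print(df.groupby("bucket", observed=True).size())   # the "histogram" of predicted demand
```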
1.) How do these prediction buckets demonstrate the segmentation power of the model?
They then plotted the actual demand against the predicted demand within each prediction bucket, and showed that the actual demand falls within the confidence interval.
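Continuing the sketch above, I imagine the comparison looked something like this (the article doesn't say whether the interval is built around the observed or the predicted rate; here I assume a rough normal-approximation binomial interval around the observed rate per bucket):

```python
# average actual outcome vs. average prediction in each bucket
summary = df.groupby("bucket", observed=True).agg(
    n=("bought", "size"),
    actual=("bought", "mean"),
    predicted=("q_hat", "mean"),
)

# approximate 95% binomial interval around the observed rate in each bucket
se = np.sqrt(summary["actual"] * (1 - summary["actual"]) / summary["n"])
summary["ci_low"] = summary["actual"] - 1.96 * se
summary["ci_high"] = summary["actual"] + 1.96 * se

# "actual demand within the confidence interval" would then mean that
# 'predicted' lies inside [ci_low, ci_high] in every bucket
print(summary)
```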
2.) How does this even make sense? To me, it feels like they are using the same data they fitted the model on, and then showing that the model fits!