If an ML model is trained to predict whether a lead will convert, and the company changes its behavior based on those predictions, a feedback loop can be created. If I only call the top 10% of my leads and never call the bottom 90%, then over time my top 10% will show a much higher conversion rate even if my model was garbage, because uncalled leads can never convert. If I then want to train a new model, it will pick up my old model's behavior, since my actions toward those leads were driven by the old model's scores. How would I train a new model without that bias?
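For concreteness, here is a minimal sketch of one common mitigation: keep calling a small random exploration sample of the low-scored leads, so the next training set contains unbiased labels for leads the old model would have rejected. All names and numbers below are hypothetical, and the "garbage" model is simulated as pure noise; this is an illustration, not a production recipe.

```python
import random

random.seed(0)

# Hypothetical leads, each with a hidden true conversion probability.
leads = [{"id": i, "p_convert": random.random() * 0.3} for i in range(10_000)]

# Old model's score -- here pure noise, i.e. a "garbage" model.
for lead in leads:
    lead["score"] = random.random()

leads.sort(key=lambda l: l["score"], reverse=True)
top = leads[: len(leads) // 10]    # top 10%: always called under the old policy
rest = leads[len(leads) // 10 :]   # bottom 90%: normally never called

# Mitigation: call a small random exploration sample of the bottom 90%,
# so low-scored leads also get observed (unbiased) outcomes.
explored = random.sample(rest, 500)

training_set = []
for lead in top + explored:
    # An outcome is only observable for leads that were actually called.
    converted = random.random() < lead["p_convert"]
    training_set.append((lead, converted))

# Uncalled leads have no observed label; they must either be excluded or
# handled with reject-inference techniques rather than labeled "no conversion".
print(len(training_set))  # 1500 labeled examples: 1000 top + 500 explored
```

The exploration sample can then be up-weighted (inverse of its sampling probability) when fitting the new model, so it stands in for the whole uncalled population.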
Search for "rejection bias" and "reject inference"; see https://stats.stackexchange.com/q/13533/232706, https://stats.stackexchange.com/q/415616/232706 – Ben Reiniger Feb 04 '20 at 21:15
@BenReiniger thanks! Now, I know the proper terminology at least. – root Feb 04 '20 at 22:48