I was watching a presentation by an ML specialist from a major retailer who had developed a model to predict out-of-stock events.
Let's assume for a moment that, over time, their model becomes very accurate. Wouldn't that somehow be "self-defeating"? That is, if the model truly works well, they will be able to anticipate out-of-stock events and avoid them, eventually reaching a point where they have few or no out-of-stock events at all. But then there won't be enough recent historical data to train the model on, or the model will get derailed, because the same causal factors that used to indicate a stock-out no longer do so.
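To make the loop I have in mind concrete, here is a toy simulation (the features, the restocking rule, and all the numbers are invented purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, model=None):
    """One 'season': demand exceeding inventory causes a stock-out.
    If a model is supplied, restock preemptively wherever it predicts one."""
    inventory = rng.uniform(0, 100, n)
    demand = rng.uniform(0, 100, n)
    X = np.column_stack([inventory, demand])   # features as recorded *before* acting
    if model is not None:
        flagged = model.predict(X).astype(bool)
        inventory[flagged] += 100              # intervention: preemptive restock
    y = (demand > inventory).astype(int)       # label as observed *after* acting
    return X, y

# Season 1: no model yet; collect raw data and fit.
X1, y1 = simulate(10_000)
model = LogisticRegression().fit(X1, y1)
print("season 1 stock-out rate:", y1.mean())   # roughly 0.5 in this toy setup

# Season 2: act on the model's predictions, then try to retrain.
X2, y2 = simulate(10_000, model=model)
print("season 2 stock-out rate:", y2.mean())   # collapses toward 0
```

In the second season the positive class nearly vanishes, and the recorded features still "look like" stock-outs without producing one, so naive retraining on (X2, y2) seems bound to degrade.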
What are the strategies for dealing with such a scenario?
Additionally, one could envision the opposite situation: a recommender system might become a "self-fulfilling prophecy", with sales of item pairs increasing simply because the system recommends them together, even if the two items aren't really that related.
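Again as a toy sketch (the affinities, the boost, and the session counts are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two item pairs with IDENTICAL true affinity; each round the system
# promotes whichever pair has the larger observed co-purchase count.
true_affinity = np.array([0.05, 0.05])
boost = 0.05               # extra co-purchase probability while promoted
counts = np.zeros(2)       # the recommender's only "signal"

for _ in range(50):
    promoted = int(counts.argmax())
    p = true_affinity.copy()
    p[promoted] += boost                  # the recommendation itself drives sales
    counts += rng.binomial(1000, p)       # 1000 shopping sessions per round

print("observed co-purchases:", counts)
# One pair ends up looking roughly twice as "related" as the other,
# even though the underlying affinities are identical.
```

An arbitrary early fluctuation gets locked in, and the observed co-purchase data ends up reflecting the recommender's own past behaviour rather than any real relationship between the items.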
It seems to me that both are the result of a feedback loop between the predictor's output and the actions taken based on it. How can one deal with situations like this?