We have developed a predictive model to show certain elements increasing and decreasing in certain locations of the country.
What is an acceptable accuracy for a prediction model?
This depends entirely on your use case. Here are some ways to look at it:
Say I have a problem with a 90% incidence (90% of my data takes the value 1). Then, if my model is right 85% of the time, I'd be better off calling 1 every time, because that would give me 90% accuracy.
Now, if I have a phenomenon that occurs only 1% of the time, then a model whose positive calls are correct 10% of the time is ten times better than the base rate, and that could be a very useful result.
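To make the arithmetic concrete, here is a small sketch using the hypothetical figures above (all numbers are illustrative, not real data):

```python
# Case 1: 90% of labels are 1. Always predicting 1 scores 90%,
# so an 85%-accurate model is worse than this trivial baseline.
incidence = 0.90
baseline_accuracy = max(incidence, 1 - incidence)  # majority-class guess
model_accuracy = 0.85
print(f"majority baseline: {baseline_accuracy:.0%}, model: {model_accuracy:.0%}")
print("model beats baseline:", model_accuracy > baseline_accuracy)  # False

# Case 2: the event occurs 1% of the time. A model whose positive
# calls are right 10% of the time has 10x the base rate -- a large
# lift even though 10% looks unimpressive on its own.
base_rate = 0.01
precision = 0.10
print(f"lift over base rate: {precision / base_rate:.0f}x")
```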
In the end, you probably want to find a benchmark. The minimum benchmark is that your model has to perform better than chance. Beyond that, you want to do better than what others have done. In my work I usually try to improve over my clients' older models, so that is my benchmark. There is no golden rule of thumb, so you have to use your own judgment.
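As a rough sketch of the "beat chance" benchmark, assuming scikit-learn is available, you can compare your model against a majority-class dummy on the same split (the synthetic data here just stands in for your own):

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Imbalanced toy data: roughly 90% of samples in one class.
X, y = make_classification(n_samples=5000, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Chance-level benchmark: always predict the majority class.
dummy = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)

# A candidate model only clears the minimum bar if it beats the dummy.
model = LogisticRegression().fit(X_tr, y_tr)

print(f"baseline accuracy: {dummy.score(X_te, y_te):.3f}")
print(f"model accuracy:    {model.score(X_te, y_te):.3f}")
```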
There is no line in the sand. You have to define what is acceptable from a business standpoint.
If your model predicts at 70% accuracy, but guessing at random gives 50% accuracy, does it make sense to use the model?
Maybe the model has good predictive accuracy, but the cost of making a wrong prediction far outweighs the benefit of making a dozen right predictions. Does it still make sense to use the model?
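To illustrate with made-up numbers, here is a toy expected-value calculation where one wrong prediction costs about as much as a dozen right ones earn:

```python
# Hypothetical cost structure: a wrong prediction loses 1200,
# a right prediction earns 100 (so one miss ~ a dozen hits).
cost_wrong = 1200.0
gain_right = 100.0

def expected_value_per_prediction(accuracy):
    return accuracy * gain_right - (1 - accuracy) * cost_wrong

for acc in (0.70, 0.90, 0.95):
    ev = expected_value_per_prediction(acc)
    print(f"accuracy {acc:.0%}: expected value per prediction = {ev:+.2f}")

# At 70% accuracy the model loses money on average despite beating chance;
# it only pays off above roughly 92% here. "Acceptable" depends on the
# cost structure, not just the accuracy number.
```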
Only you can answer these questions because they depend on your context.