
I have a question about my prediction model, which estimates used car prices. For example:

  1. Car: 20.000 km, real price: 32.000 Euro, prediction: 27.000 Euro --> APE: 15,6%, absolute difference: 5.000 Euro

  2. Car: 300.000 km, real price: 5.000 Euro, prediction: 3.000 Euro --> APE: 40,0%, absolute difference: 2.000 Euro

You see, everyone would say the second prediction is better, but if you use the (M)APE, it says the first prediction is better. So my question is: is there a good measure of prediction accuracy for a case like this?
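The disagreement described above can be reproduced in a few lines (a sketch; the helper names are mine, not from any library):

```python
def ape(actual, predicted):
    """Absolute percentage error of a single prediction."""
    return abs(actual - predicted) / abs(actual) * 100

def ae(actual, predicted):
    """Absolute error of a single prediction."""
    return abs(actual - predicted)

# (real price, prediction) in EUR, from the two cars above
cars = [(32000, 27000), (5000, 3000)]
for actual, pred in cars:
    print(f"APE: {ape(actual, pred):.1f}%  AE: {ae(actual, pred)} EUR")
```

Car 1 wins on absolute percentage error, car 2 wins on absolute error, so the two measures rank the predictions in opposite orders.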

Stephan Kolassa
Chris Ku
    http://www.robjhyndman.com/papers/forecast-accuracy.pdf – Matt Weller Dec 09 '15 at 08:44
  • Note that you have a sample size of 2. You calculate the APE per observation and then take the mean of the 2. Then you can say your model has a MAPE of 27.8%. But you need more observations than 2 to calculate a reliable mean. Often the median is used instead of the mean if the distribution is skewed (i.e. a few extreme values can significantly affect the mean). Then compare MAPE of 2 models to select one. This is well worth a read and is less involved than the previous link: https://www.otexts.org/fpp/2/5 – Matt Weller Dec 09 '15 at 08:58
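Following up on the comment's point about skewed distributions, a quick sketch (the APE values are invented) shows why the median is often preferred over the mean when a few extreme errors are present:

```python
import statistics

# Invented APEs: four moderate errors and one extreme outlier.
apes = [5.0, 8.0, 10.0, 12.0, 300.0]

# The mean is dragged far upward by the single outlier;
# the median stays representative of the typical error.
print(statistics.mean(apes))    # dominated by the outlier
print(statistics.median(apes))  # robust to it
```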

2 Answers


There are many, many, many ways of assessing forecast or prediction accuracy. This chapter in a highly recommended free online forecasting textbook gives a few of them.

Unfortunately, as you have found, different accuracy measures can give different answers to the question of which forecast (or forecasting method) is better. Here, the first prediction has a lower Absolute Percentage Error (APE) but a higher Absolute Error (AE), and the other way around for the second prediction. Scientific publications in forecasting will usually report multiple accuracy measures and hope that a consistent picture emerges.

Unfortunately, there is no "right" accuracy measure. You will need to think about what you actually want to do with your forecast. Which decision will you base on the forecast? Why are you forecasting the price of used cars? What are the consequences of a wrong forecast?

  • Do the consequences depend on the absolute error of the forecast? If so, use MAE.
  • Or do they depend on the percentage of the error? If so, use MAPE.
  • Do you need to get an interval or quantile forecast so you have enough "safety stock" in cash? If so, assess interval coverage.
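The three options above can be sketched side by side (the numbers are invented, and the ±20% interval is a hypothetical stand-in for a real prediction interval):

```python
import numpy as np

# Invented actuals and point forecasts, in EUR.
actual = np.array([32000, 5000, 15000, 8000])
pred   = np.array([27000, 3000, 16000, 9000])

# Mean Absolute Error: consequences scale with the absolute error.
mae = np.mean(np.abs(actual - pred))

# Mean Absolute Percentage Error: consequences scale with the relative error.
mape = np.mean(np.abs(actual - pred) / actual) * 100

# Interval coverage: fraction of actuals falling inside a
# (hypothetical) prediction interval of +/-20% around the point forecast.
lower, upper = pred * 0.8, pred * 1.2
coverage = np.mean((actual >= lower) & (actual <= upper))

print(mae, mape, coverage)
```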

However, note in particular that MAPE has a couple of serious problems. It is bounded for underforecasts: since you (probably) will never forecast below zero, an underforecast cannot have an APE worse than 100%. But in principle nothing keeps you from forecasting too high - the car you predicted to cost 27,000 EUR could end up costing only 3,000 EUR, yielding an 800% error. Thus, the MAPE incentivizes you to forecast too low, that is, to bias your forecast downward. This effect is stronger the larger the spread of your actuals - as seems to be the case here. Here is a simple illustration of this effect (full disclosure: I wrote that article).
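This asymmetry can be demonstrated with a small simulation (synthetic, right-skewed "prices"; the lognormal parameters are arbitrary): a constant forecast well below the mean scores a better MAPE than an unbiased constant forecast.

```python
import numpy as np

rng = np.random.default_rng(0)
# Right-skewed synthetic "prices", as used-car actuals tend to be.
actuals = rng.lognormal(mean=9, sigma=1, size=10_000)

# An unbiased constant forecast vs. a deliberately low one.
unbiased = np.full_like(actuals, np.mean(actuals))
low      = np.full_like(actuals, np.median(actuals))  # well below the mean

def mape(a, f):
    return np.mean(np.abs(a - f) / a) * 100

print(mape(actuals, unbiased))  # higher
print(mape(actuals, low))       # lower: MAPE rewards the low-biased forecast
```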

Bottom line: think about what you want to do with your forecast. Think carefully about whether your accuracy measure incentivizes you to get a "good" forecast. Consider looking at multiple accuracy measures.

Stephan Kolassa

MAPE is Mean Absolute Percent Error.

It makes sense to compare two models on their MAPE values, since the errors are averaged over all observations, giving a measure of overall model performance.

Ideally, you should compare two models on their MAPE values, not two individual predictions.

For individual predictions, the higher the percentage error, the worse the prediction.

For example:

Case 1: Real - $40,000 Pred - $39,000  Dev- $1000 PE - 2.5%
Case 2: Real - $2,000  Pred - $1,000   Dev- $1000 PE - 50%

The absolute deviations are the same ($1000).

It makes a bigger business impact if your predictions are off by $1000 in the second case than in the first.
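The two cases can be verified with a small sketch (the helper name is mine):

```python
def percent_error(real, pred):
    """Absolute deviation as a percentage of the real value."""
    return abs(real - pred) / real * 100

# (real, predicted) in USD, from the two cases above
cases = [(40000, 39000), (2000, 1000)]
for real, pred in cases:
    print(f"Dev: ${abs(real - pred)}  PE: {percent_error(real, pred):.1f}%")
```

The same $1,000 deviation yields a 2.5% error on the expensive car but a 50% error on the cheap one.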

astrosyam