Questions tagged [mase]

The Mean Absolute Scaled Error (MASE) was proposed by Hyndman & Koehler (2006, *International Journal of Forecasting*) as a scale-free accuracy measure for point forecasts. It is defined as the ratio of the forecasts' MAE to the one-step MAE achieved *in-sample* by a simple benchmark method (often the naive random walk forecast); the *in-sample* requirement is frequently gotten wrong.

For historical data $y_1, \dots, y_T$ and forecasts $\hat{y}_{T+1}, \dots, \hat{y}_{T+H}$, the MASE is defined as

$$\text{MASE}:=\frac{\frac{1}{H}\sum_{h=1}^H|y_{T+h}-\hat{y}_{T+h}|}{Q}$$

The numerator is the Mean Absolute Error (MAE) of our forecasts, while the denominator, $Q$, is a scaling factor equal to the MAE of in-sample one-step benchmark forecasts. For non-seasonal data, the benchmark method is often the naive random walk, so that

$$Q = \frac{1}{T-1}\sum_{t=2}^T|y_t-y_{t-1}|.$$

For seasonal data, the benchmark method is often the seasonal naive method, so that

$$Q = \frac{1}{T-m}\sum_{t=m+1}^T|y_t-y_{t-m}|$$

where $m$ is the period of seasonality.
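
To make the definition concrete, here is a minimal Python/NumPy sketch of the computation above (the function `mase` and its argument names are illustrative, not taken from any particular package). The key point is that the scaling factor $Q$ is computed from the in-sample (training) data only, never from the holdout period.

```python
import numpy as np

def mase(y_train, y_test, y_pred, m=1):
    """Mean Absolute Scaled Error (illustrative sketch).

    y_train : historical data y_1, ..., y_T
    y_test  : out-of-sample actuals y_{T+1}, ..., y_{T+H}
    y_pred  : forecasts for the same horizon
    m       : seasonal period; m=1 gives the non-seasonal naive benchmark
    """
    y_train, y_test, y_pred = map(np.asarray, (y_train, y_test, y_pred))

    # Numerator: MAE of the forecasts over the holdout period.
    mae = np.mean(np.abs(y_test - y_pred))

    # Denominator Q: MAE of one-step (seasonal) naive forecasts,
    # computed *in-sample*, i.e. on the training data only.
    q = np.mean(np.abs(y_train[m:] - y_train[:-m]))

    return mae / q


# Example (with made-up data): score a naive forecast on a random-walk series.
rng = np.random.default_rng(0)
y = rng.normal(size=120).cumsum()
train, test = y[:100], y[100:]
naive_forecast = np.repeat(train[-1], len(test))
print(mase(train, test, naive_forecast))
```

A value above 1 means the forecasts had a larger MAE than the benchmark's average one-step in-sample error; a value below 1 means they did better.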

An open-access, non-technical introduction is provided by Hyndman and Athanasopoulos (2014, Section 2.5). Hyndman (2006, *Foresight*) is a longer, non-gated and non-technical introduction to the MASE in the context of intermittent demand forecasting. However, Kolassa (2016, *International Journal of Forecasting*) notes that the MASE is explicitly not suitable for assessing forecasts of intermittent data.

Alternatives to the MASE as a point forecast accuracy measure include the MAPE, the sMAPE and the MAE.

38 questions
30 votes, 2 answers

Interpretation of mean absolute scaled error (MASE)

Mean absolute scaled error (MASE) is a measure of forecast accuracy proposed by Hyndman & Koehler (2006). $$MASE=\frac{MAE}{MAE_{in-sample, \, naive}}$$ where $MAE$ is the mean absolute error produced by the actual forecast, while $MAE_{in-sample,…
Richard Hardy
9 votes, 1 answer

How do I decide when to use MAPE, SMAPE and MASE for time series analysis on stock forecasting

My task is to forecast the stock required for a retail store one month ahead, on a daily basis. How do I decide whether MAPE, SMAPE or MASE is a good metric for this scenario? In my context, over-forecasting is better than under-forecasting.
william007
8 votes, 1 answer

Time series forecasting accuracy measures: MAPE and MASE

Here is a toy example showing that MAPE and MASE are not consistent when measuring forecasting accuracy. The data consist of 100 white-noise and 100 $AR(1)$ time series of length $N=500$, with mean $\mu=1$ and standard deviation $\sigma=1$. #…
yanfei kang
6 votes, 1 answer

Can I use mean absolute scaled error (MASE) from the accuracy function for time series cross validation?

I am using the "forecast" package in R to forecast time series data. I am programming some time series cross-validation based on resources from Rob J Hyndman. The last paragraph on page 7 of Hyndman's "Measuring forecast accuracy" states:…
DataJack
4 votes, 1 answer

Model performance in time-series forecasting with some outliers

I'm creating forecasts for products, some of which have large seasonal spikes around times like Christmas and/or Easter but relatively low sales volume at other times. For this particular product shown in the graph below, the most important part…
Viðar Ingason
3 votes, 1 answer

ARIMA: How to interpret MAPE?

I am using the forecast package in R to generate an ARIMA model for my data. I started with the auto.arima function as a first try and got an ARIMA(1,1,2) model. ar1 ma1 ma2 0.7734 -1.0773 0.1191 s.e. 0.0709 0.0962 …
MikeHuber
2 votes, 1 answer

Out of sample MASE

When calculating the MASE, the original paper suggests using the in-sample naive forecast error to scale the out-of-sample forecast error. When I use the MAE generated by a naive forecast on the out-of-sample dataset, however, I get a MASE…
Madzor
2 votes, 0 answers

Why is MASE scaled by the mean absolute error produced by a naive forecast calculated on the in-sample data

Wouldn't a better scaling factor be the MAE produced by a naive forecast on the test data itself? When evaluating MASE on the training set, this essentially becomes a comparison of the forecast model with a naive one; why do we not take this…
KRS-fun
2 votes, 1 answer

How to interpret MASE for longer horizon forecasts?

After looking at Hyndman and Koehler (2006) and applying the metric to my own data, I have been convinced that MASE is a better metric for evaluating forecast error than the method I had previously been using (MAPE), at least for short horizon…
Barker
1 vote, 1 answer

R : accuracy.gts, no MASE with monthly data

I have a problem similar to the one presented in this post: https://stackoverflow.com/questions/11092536/forecast-accuracy-no-mase-with-two-vectors-as-arguments even though it may not be related. I'm trying to make predictions of hierarchical time…
Alex
1 vote, 1 answer

Interpretation of scaled error measures

Can someone give me an explanation of how one would interpret the result of a scaled error measure, for example the Mean Absolute Scaled Error (MASE)? The numerator is the mean absolute error and the denominator the mean absolute error of a…
ktl12
1 vote, 2 answers

Is there any standard / criteria of good forecast measured by SMAPE and MASE?

I have built a forecasting model for a company. Since it is intended for practical use, I prefer to use relative error measures (like MAPE, SMAPE, & MASE) to assess my model's performance and display them on the dashboard (the…
1 vote, 2 answers

Understanding MASE Value

I've looked through many of the other posts concerning the Mean Absolute Scaled Error (MASE) forecast metric and haven't been able to sort out my problem just yet. I'm working with some weather model forecast data (hourly forecasts from 0 to the…
1 vote, 1 answer

How can MASE (Mean Absolute Scaled Error) score value be interpreted for non time series data?

If I have used MASE to calculate the error of non-time-series data (as described by Dr. Rob Hyndman here), how can I know whether the score received is good or not? Since it is not a time series, a random-walk naive model is irrelevant here, and the threshold…
YonGU
1 vote, 1 answer

MASE and handling nan-values

I'd like to ask for advice on how to correctly compute the Mean Absolute Scaled Error (Hyndman & Koehler, 2006) for the following example: y_hat = [1, 2, 3, 4, 0, 0, 0, 0, 9], y_true = [1, 2, 3, 4, np.nan, 5, 6, 7, 9]. Should I delete…
Bear