Based on his textbook, for a non-seasonal time series a useful way to define a scaled error is via naïve forecasts:
$$
q_{j} = \frac{\displaystyle e_{j}}{\displaystyle\frac{1}{T-1}\sum_{t=2}^T |y_t-y_{t-1}|}
$$
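To make this concrete, here is a minimal NumPy sketch of the scaling denominator (the names `naive_benchmark_scale` and `y_train` are my own, illustrative choices, not from any library):

```python
import numpy as np

def naive_benchmark_scale(y_train):
    """In-sample MAE of the one-step naive forecast: (1/(T-1)) * sum |y_t - y_{t-1}|."""
    y = np.asarray(y_train, dtype=float)
    return np.mean(np.abs(np.diff(y)))

# q_j = e_j / naive_benchmark_scale(y_train), for test-set errors e_j
```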
According to his answer here, it is possible to scale the errors using the in-sample mean as the benchmark forecast. If $e_j$ denotes a prediction error on the test data, then the scaled errors $q_j$ are:
$$
q_{j} = \frac{\displaystyle e_{j}}{\displaystyle\frac{1}{N}\sum_{i=1}^N |y_i-\bar{y}|}
$$
where $y_1,\dots,y_N$ denote the training data.
Note that $y_{t-1}$ is replaced with $\bar{y}$.
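Continuing in the same spirit, the mean-benchmark variant only changes the denominator; again the names here are illustrative assumptions:

```python
import numpy as np

def mean_benchmark_scale(y_train):
    """In-sample MAE of the mean forecast: (1/N) * sum |y_i - y_bar|."""
    y = np.asarray(y_train, dtype=float)
    return np.mean(np.abs(y - y.mean()))

# q_j = e_j / mean_benchmark_scale(y_train), for test-set errors e_j
```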
The MASE compares "your" forecast against a naive benchmark forecast, because the MASE denominator is calculated in-sample, not on the holdout sample. The MASE is thus a metric for comparing errors to a user-chosen baseline:
$$
\text{MASE} = \operatorname{mean}(|q_{j}|)
$$
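Putting it together, a self-contained sketch of the whole computation under the same assumptions (the `mase` function and its `benchmark` argument are hypothetical, not a library API):

```python
import numpy as np

def mase(y_test, y_pred, y_train, benchmark="naive"):
    """MASE = mean(|q_j|), with the in-sample benchmark MAE as the scale."""
    y = np.asarray(y_train, dtype=float)
    e = np.asarray(y_test, dtype=float) - np.asarray(y_pred, dtype=float)
    if benchmark == "naive":
        scale = np.mean(np.abs(np.diff(y)))    # (1/(T-1)) * sum |y_t - y_{t-1}|
    else:                                      # "mean" benchmark
        scale = np.mean(np.abs(y - y.mean()))  # (1/N) * sum |y_i - y_bar|
    return np.mean(np.abs(e / scale))
```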
Interpreting the MASE is not an easy job; this earlier thread on interpreting the MASE from @Stephan Kolassa might be helpful.
My recommendations:
- Check this answer to avoid confusion about the calculation of the MASE vs. the MAE.
- If you have a single series, then the MASE is less informative than the MAE, since the MASE is simply the MAE scaled by a factor that does not depend on the forecast. The MASE makes sense once you have multiple series on different levels, where you can't very well compare "raw" MAEs.
Note: writing $Q$ for the scaling statistic (the in-sample MAE of the benchmark forecast), the MASE can be expressed compactly as:
$$
\text{MASE}=\text{MAE}/Q
$$
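As a quick sanity check that $\operatorname{mean}(|q_j|)$ and $\text{MAE}/Q$ coincide, here is a tiny made-up example (all numbers invented purely for illustration):

```python
import numpy as np

y_train = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
y_test  = np.array([14.0, 13.0])
y_pred  = np.array([13.0, 13.5])

mae = np.mean(np.abs(y_test - y_pred))   # out-of-sample MAE = 0.75
Q   = np.mean(np.abs(np.diff(y_train)))  # in-sample naive MAE = 1.5
print(mae / Q)                           # 0.5, the same value as mean(|q_j|)
```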