I was searching for how to interpret these indices myself.
Note that the best method remains to plot the predictions over the real data for the same period.
I found this information on various websites, including Wikipedia, Stack Exchange / Stack Overflow, statisticshowto and other places:
(You may "Ecosia" some of these phrases to find their sources.)
ME: Mean Error -- The mean error is an informal term that usually refers to the average of all the errors in a set. An "error" in this context is an uncertainty in a measurement, or the difference between the measured value and the true/correct value.
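As a minimal sketch of how this can be computed, assuming NumPy arrays and the sign convention error = actual - predicted (the convention is an assumption; flip it if your tooling defines errors the other way):

```python
import numpy as np

def mean_error(actual, predicted):
    # Signed errors cancel each other out, so the ME indicates bias,
    # not overall accuracy: an ME near 0 can still hide large errors.
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(actual - predicted)
```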
RMSE: Root Mean Squared Error -- the square root of the average of the squared errors. Since the errors are squared before they are averaged, the RMSE gives a relatively high weight to large errors.
MAE: Mean Absolute Error -- The MAE measures the average magnitude of the errors in a set of forecasts, without considering their direction. It measures accuracy for continuous variables. -- The RMSE will always be larger than or equal to the MAE; the greater the difference between them, the greater the variance in the individual errors in the sample. If the RMSE = MAE, then all the errors are of the same magnitude. -- Both the MAE and RMSE can range from 0 to ∞. They are negatively-oriented scores: lower values are better.
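A sketch of both, under the same assumptions as above (the sample values are purely illustrative):

```python
import numpy as np

def mae(actual, predicted):
    # Average magnitude of the errors, ignoring their direction.
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))

def rmse(actual, predicted):
    # Squaring before averaging weights large errors more heavily than the MAE does.
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

actual = np.array([10.0, 12.0, 11.0, 13.0])
predicted = np.array([11.0, 11.0, 12.0, 10.0])
print(mae(actual, predicted))   # 1.5
print(rmse(actual, predicted))  # ~1.73, never smaller than the MAE
```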
MPE: Mean Percentage Error -- The mean percentage error (MPE) is the computed average of the percentage errors by which forecasts of a model differ from actual values of the quantity being forecast.
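A matching sketch; note the division by the actual values, which assumes none of them are zero:

```python
import numpy as np

def mpe(actual, predicted):
    # Signed percentage errors; over- and under-forecasts cancel,
    # so like the ME this indicates bias rather than overall accuracy.
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean((actual - predicted) / actual) * 100.0
```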
MAPE: Mean Absolute Percentage Error -- The MAPE, as a percentage, only makes sense for values where divisions and ratios make sense. It doesn't make sense, for example, to calculate percentages of temperatures. -- MAPEs greater than 100% can occur; if you then compute accuracy as 100% minus the MAPE, this may lead to a negative accuracy, which people may have a hard time understanding. -- An error close to 0% indicates increasing forecast accuracy; for example, a MAPE of around 2.2% implies the model is about 97.8% accurate in predicting the next 15 observations.
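Under the same assumptions (the example numbers are illustrative):

```python
import numpy as np

def mape(actual, predicted):
    # Average of absolute percentage errors; can exceed 100% and is
    # undefined when any actual value is 0.
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

print(mape([100, 200], [102, 196]))  # 2.0, i.e. "about 98% accurate"
```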
MASE: Mean Absolute Scaled Error -- Scale invariance: the mean absolute scaled error is independent of the scale of the data, so it can be used to compare forecasts across data sets with different scales. -- It is OK for scales that do not have a meaningful 0, and it penalizes positive and negative forecast errors equally. -- Values greater than one indicate that in-sample one-step forecasts from the naïve method perform better than the forecast values under consideration. -- When comparing forecasting methods, the method with the lowest MASE is the preferred method.
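A sketch of the non-seasonal form; here train is assumed to be the in-sample series the model was fitted on:

```python
import numpy as np

def mase(train, actual, predicted):
    # Forecast MAE scaled by the in-sample MAE of the one-step naive
    # forecast (non-seasonal form). Values below 1 beat the naive method.
    train = np.asarray(train, dtype=float)
    scale = np.mean(np.abs(np.diff(train)))  # naive one-step in-sample errors
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted))) / scale
```

Because both the numerator and the denominator are on the scale of the data, the scale cancels out, which is what makes the MASE comparable across series.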
ACF1: Autocorrelation of errors at lag 1 -- It is a measure of how much the current value is influenced by the previous values in a time series. -- Specifically, the autocorrelation function tells you the correlation between points separated by various time lags: the ACF tells you how correlated points are with each other, based on how many time steps they are separated by. That is the gist of autocorrelation: it is how correlated past data points are to future data points, for different values of the time separation. -- Typically, you'd expect the autocorrelation function to fall towards 0 as points become more separated (i.e. as the lag n becomes large in the notation below), because it's generally harder to forecast further into the future from a given set of data. This is not a rule, but it is typical. -- ACF(0) = 1 (all data are perfectly correlated with themselves), ACF(1) = 0.9 (the correlation between a point and the next point is 0.9), ACF(2) = 0.4 (the correlation between a point and a point two time steps ahead is 0.4), etc.
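A sketch of the lag-1 estimate, using the common normalization by the overall variance (residuals is assumed to be the array of forecast errors):

```python
import numpy as np

def acf1(residuals):
    # Lag-1 autocorrelation of the forecast errors; a value near 0
    # suggests the residuals carry no leftover one-step structure.
    r = np.asarray(residuals, dtype=float)
    r = r - r.mean()
    return np.sum(r[:-1] * r[1:]) / np.sum(r * r)
```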
Note also that an RMSE of 100 for a series whose mean is in the 1000's is better than an RMSE of 5 for a series in the 10's. So you can't really use scale-dependent measures like the RMSE or MAE to compare the forecasts of two differently scaled time series; MAPE (with the caveats above), correlation and Min-Max Error can be used for that instead.
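To see the scale-dependence concretely, reusing the rmse and mape helpers sketched above (values purely illustrative):

```python
import numpy as np

actual = np.array([10.0, 12.0, 11.0, 13.0])
predicted = np.array([11.0, 11.0, 12.0, 10.0])

# Rescaling the series by 100 multiplies the RMSE by 100...
print(rmse(actual, predicted), rmse(100 * actual, 100 * predicted))  # ~1.73 vs ~173
# ...while the MAPE, a relative measure, is unchanged.
print(mape(actual, predicted), mape(100 * actual, 100 * predicted))  # equal
```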
..........
All your indicators (ME, RMSE, MAE, MPE, MAPE, MASE, ACF1, ...) are aggregations of two types of error: a bias (you have the wrong model but an accurate fit) plus a variance (you have the right model but an inaccurate fit). And there is no statistical method to know whether you have a high bias and low variance or a high variance and low bias. So I suggest you make a plot and eye-stimate to select the "best" one, "best" meaning the one with the least business consequences if you are wrong.
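As a minimal sketch of that plot, echoing the advice at the top of this answer (the arrays are hypothetical placeholders for your own hold-out data and predictions):

```python
import numpy as np
import matplotlib.pyplot as plt

actual = np.array([10.0, 12.0, 11.0, 13.0, 12.5])
predicted = np.array([11.0, 11.0, 12.0, 10.0, 13.0])

plt.plot(actual, label="actual")
plt.plot(predicted, label="forecast", linestyle="--")
plt.legend()
plt.title("Forecast vs. actual over the same period")
plt.show()
```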
Generally, you want all of these values to be as small in magnitude as possible.