I have a multiple linear regression model that I fit on different datasets. Suppose the first dataset produces y in the range [1, 100] and the second in the range [1, 1000]. I can't simply compare the MAE across the two datasets: if the MAE for the first one is 2 and for the second is 20, I'd say the model is equally effective on both, but I could not find a scientific way to show this. As far as I know, there is no standard normalised MAE. I could consider NRMSE, i.e. RMSE / (ymax - ymin), but I was wondering whether there are better ways to compare the effectiveness of the same model on different datasets.
I am also aware of MAPE and MASE. What is the best practice for reporting a scale-independent forecast error metric? I am interested in the theory: which of NRMSE, MAPE, or MASE works for my case? I'm also using Python.
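For concreteness, here is a minimal NumPy sketch of the three candidate metrics as I understand them. The function names and the `y_train` argument for MASE (the series used to compute the naive-forecast scaling) are my own choices, not from any particular library:

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Range-normalised RMSE: RMSE / (ymax - ymin) of the observed values."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

def mape(y_true, y_pred):
    """Mean absolute percentage error; undefined when y_true contains zeros."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

def mase(y_true, y_pred, y_train):
    """MAE scaled by the in-sample MAE of a naive one-step-ahead forecast."""
    naive_mae = np.mean(np.abs(np.diff(y_train)))
    return np.mean(np.abs(y_true - y_pred)) / naive_mae
```

A value of MASE below 1 means the model beats the naive forecast on the training scale, which is one way it stays comparable across datasets with different ranges.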