I will perform time-series prediction and I will report the accuracy of my system with a measure like RMSE or MAE.
However, the variables I will predict are in different ranges: say one is in the millions (~1e6) whereas the other is a fraction (~1e-1). So when I report the MAE on these two variables, one will be roughly 10 million times bigger than the other, even though my system's accuracy is similar on both.
So what are good ways to obtain a comparable performance measure?
The ways I can think of are:
- Dividing the score by the mean of the data.
- Expressing each residual as a fraction of the true value: if the true value is x and my prediction is Xp, I can measure the error as |x - Xp| / x (a rough sketch of both ideas follows this list).
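To make the two ideas concrete, here is a rough sketch of what I mean (not my actual pipeline; the toy data, function names, and the 1% error level are made up for illustration). It compares the raw MAE to a mean-normalized MAE (first bullet) and to the relative-error measure from the second bullet:

```python
import numpy as np

def mae(y_true, y_pred):
    """Plain mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def mean_normalized_mae(y_true, y_pred):
    """MAE divided by the mean magnitude of the true values (first idea)."""
    return mae(y_true, y_pred) / np.mean(np.abs(y_true))

def mape(y_true, y_pred):
    """Mean of |x - Xp| / x over all points (second idea).
    Unstable when any true value is close to zero."""
    return np.mean(np.abs((y_true - y_pred) / y_true))

# Toy data: one series around 1e6, one around 1e-1, with prediction
# errors of roughly 1% of the signal in both cases.
rng = np.random.default_rng(0)
big_true   = 1e6 * (1 + 0.1 * rng.standard_normal(100))
big_pred   = big_true * (1 + 0.01 * rng.standard_normal(100))
small_true = 1e-1 * (1 + 0.1 * rng.standard_normal(100))
small_pred = small_true * (1 + 0.01 * rng.standard_normal(100))

for name, yt, yp in [("~1e6 series", big_true, big_pred),
                     ("~1e-1 series", small_true, small_pred)]:
    print(name,
          "MAE:", mae(yt, yp),                                   # differs by ~7 orders of magnitude
          "mean-normalized MAE:", mean_normalized_mae(yt, yp),   # comparable across series
          "MAPE:", mape(yt, yp))                                 # comparable across series
```

On this toy data the raw MAE differs by several orders of magnitude between the two series, while both normalized versions come out at a similar level, which is the kind of comparability I am after.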
Can you guide me toward a meaningful way to solve this problem?