I have 2 models (for simplicity, let's call them AR(1) and MA(1)), each making 1-day-ahead forecasts of a time series.
If I had only 1 time series I would just use the Diebold-Mariano
test to compare the predictive abilities of the models.
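For concreteness, this is roughly how I compute the DM statistic for a single series (a minimal Python sketch, assuming squared-error loss and that the 1-step-ahead loss differential is serially uncorrelated, so no HAC correction; the function name is just illustrative):

```python
import numpy as np
from scipy import stats

def dm_test(errors_a, errors_b):
    """Diebold-Mariano statistic for equal predictive accuracy,
    squared-error loss, 1-step-ahead horizon."""
    d = np.asarray(errors_a) ** 2 - np.asarray(errors_b) ** 2  # loss differential
    n = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / n)   # variance of the mean, lag 0 only
    p_value = 2 * (1 - stats.norm.cdf(abs(dm)))  # two-sided p-value vs N(0, 1)
    return dm, p_value
```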
But let's assume I have multiple time series. For each of them I have estimated AR(1) and MA(1) models. With the Diebold-Mariano test I could test whether AR(1) is better on series 1 and on series 2 separately. I would like to test the performance of the models in aggregate, though. If AR(1) is much better than MA(1) on series 1, and the two are comparable on series 2 (or MA(1) is even slightly better), I would like to test whether AR(1) is better over both series in total. Is that possible?
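What I can do today is loop over the series and run that test separately for each one (again only a sketch; `series_errors` is a hypothetical container of each series' 1-step forecast errors, filled with random numbers purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical data: series name -> (AR(1) forecast errors, MA(1) forecast errors)
series_errors = {f"series_{i}": (rng.normal(size=250), rng.normal(size=250))
                 for i in range(1, 3)}

for name, (e_ar, e_ma) in series_errors.items():
    dm, p = dm_test(e_ar, e_ma)  # dm_test as sketched above
    print(f"{name}: DM = {dm:.2f}, p = {p:.3f}")

# This gives one verdict per series, but not a single test across all of them.
```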
To give a more concrete example: assume I am forecasting stock returns for multiple companies. For some companies, certain models perform better than others. It could happen that, out of 10 companies, model A performs much better than model B for 4 of them and worse for 2 of them. Can I find out whether, overall across all 10 companies, A is a better fit than B?