I am forecasting a weekly commodity price series. I use a rolling window to estimate my model, and from each window I make point forecasts one, two, and more steps ahead.
I want to investigate forecast optimality. Diebold (2017, p. 334, list item c) indicates that one of the desirable properties of a good forecast is
> Optimal forecasts have $h$-step-ahead errors that are at most MA($h-1$).
I would like to test this for my forecasts, for a concrete $h\geq2$. How do I do that?
Here are some thoughts (comments on which will be appreciated):
I have thought of fitting an MA($h-1$) model to the forecast errors and then testing the model's residuals for nonzero autocorrelation at several lags. But does it make sense to test at lags below $h-1$? I guess not: the MA($h-1$) model is fit so as to (at least indirectly) minimize the residual autocorrelations up to lag $h-1$, so those will be close to zero regardless of whether the model is adequate for the data.
However, I could check for presence of nonzero autocorrelations starting from lag $h$ and going above, e.g. jointly testing lags $(h, \dots, h+s)$ for some $s>0$. I think I could do this with the Breusch-Godfrey test -- if I figure out how to construct the auxiliary regression needed for obtaining the test statistic.
An alternative to the Breusch-Godfrey test would be the Ljung-Box test. However, I am not sure whether it is applicable to residuals of MA models (given what we know from "Testing for autocorrelation: Ljung-Box versus Breusch-Godfrey"); is it?
References
- Diebold, F. X., *Forecasting in Economics, Business, Finance and Beyond* (version of August 1, 2017)