
I am estimating several ARIMA(p,1,q) models for the logarithm of the realised volatility of the S&P 500, where d = 1 is based on the KPSS test, even though the presence of a unit root is rejected by the ADF test. As this hints at long memory, I am also estimating a Heterogeneous Autoregressive (HAR) model (described in this paper) for the logarithm of the realised volatility. An HAR model can be seen as an AR model with restrictions on certain lags.
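For reference, a minimal sketch of the two specifications in Python with statsmodels (the series name `log_rv` and the dummy data are placeholders of mine; the HAR regressors follow the usual daily/weekly/monthly construction):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

# Placeholder series standing in for the log realised volatility of the S&P 500
log_rv = pd.Series(np.random.randn(500).cumsum() * 0.05)

# ARIMA(p,1,q) on log-RV; d = 1 handles the first differencing internally
arima_fit = ARIMA(log_rv, order=(1, 1, 1)).fit()

# HAR regressors: lagged daily value, and lagged 5-day and 22-day averages
har_X = pd.DataFrame({
    "daily":   log_rv.shift(1),
    "weekly":  log_rv.rolling(5).mean().shift(1),
    "monthly": log_rv.rolling(22).mean().shift(1),
}).dropna()
har_fit = sm.OLS(log_rv.loc[har_X.index], sm.add_constant(har_X)).fit()
```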

I would like to compare the goodness of fit of the ARIMA(p,1,q) models with that of the HAR model of the log-volatility. Since the ARIMA models are estimated on the first difference of the log-volatility while the HAR model is estimated on the log-volatility itself, I am assuming I cannot use any information criterion, since the log-likelihood functions are different.

As alternative goodness-of-fit measures, I was thinking of 1) computing and comparing the standardised RSS for both models, 2) undifferencing (how?) the ARIMA residuals and computing their RSS, or 3) differencing the HAR residuals and computing their RSS.

A completely different alternative would be to estimate the HAR model on the first difference of the log-volatility - in this case I could compare the AIC of both models - but then I would no longer stick to the HAR model in the paper and could not give the same economic interpretation to its coefficients.

Any hint is appreciated. Thanks.

    The straightforward way is to make the dependent variable the same across the alternative models, e.g. undo first-differencing for first-difference models and exponentiate for log-models. See ["Prerequisites for AIC model comparison"](http://stats.stackexchange.com/questions/48714/prerequisites-for-aic-model-comparison/100671#100671) (question and answers) and ["Comparing AIC of a model and its log-transformed version"](http://stats.stackexchange.com/questions/61332/comparing-aic-of-a-model-and-its-log-transformed-version) (question). – Richard Hardy Aug 27 '16 at 11:11

1 Answer


To compare alternative models easily, you need to make the dependent variable (and the corresponding fitted values and residuals) the same across the models.

Suppose you care about $y_t$ but have modelled $\Delta y_t$ and $\log y_t$ for convenience. To compare the two models, first transform their fitted values and residuals to the original scale corresponding to $y_t$. Then compare the model fits based on residuals $\hat\varepsilon_{i,t} = y_t - \hat y_{i,t}$ (with different $\hat y_{i,t}$ for different alternative models indexed by $i$). You can use information criteria or other techniques.
(The distribution used in the likelihood function can become more complicated for residuals corresponding to $y_t$ than for the ones corresponding to the transformed variables. E.g. if the errors of the model for $\log y_t$ were assumed to be normally distributed, they will now have to be assumed distributed as the exponential of a normal random variable, i.e. lognormally.)
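To make that concrete, here is the standard change-of-variables calculation (a sketch assuming normal errors for the log-scale model, with $\hat\mu_t$ denoting that model's fitted value):

$$ \log y_t = \hat\mu_t + \varepsilon_t, \quad \varepsilon_t \sim N(0,\sigma^2) \quad\Longrightarrow\quad f(y_t) = \frac{1}{y_t \sigma \sqrt{2\pi}} \exp\!\left( -\frac{(\log y_t - \hat\mu_t)^2}{2\sigma^2} \right), $$

so the log-likelihood evaluated on the scale of $y_t$ equals the log-likelihood of the log-scale model minus the Jacobian term $\sum_t \log y_t$.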

Regarding how to undo first differencing, take a cumulative sum of the first original observation and the subsequent first differences:

$$ \begin{aligned} y_t &= y_1 + (y_2-y_1) + \dotsc + (y_t-y_{t-1}) \\ &= y_1 + \Delta y_2 + \dotsc + \Delta y_t \\ &= y_1 + \sum_{\tau=2}^t \Delta y_{\tau}. \\ \end{aligned} $$

This can be applied to fitted values as well, replacing $y_t$ with $\hat y_t$ and $\Delta y_{\tau}$ with $\widehat{\Delta y}_{\tau}$.
Note: you will have to cut the first value because it was not modelled and you have neither a fitted value nor a residual for it. When doing model comparisons, you will also have to cut the first value of $y_t$, $\hat y_{i,t}$ and $\hat\varepsilon_{i,t}$ for the other models as well to keep the dependent variable exactly the same across all the models.
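A minimal numerical sketch of this reconstruction and trimming (names like `dy_hat` are mine; the fitted differences would come from whatever ARIMA-type model was estimated):

```python
import numpy as np

# y: original series (here, log-RV), length T; dummy data for illustration
y = np.random.randn(100).cumsum()

# dy_hat: fitted values for the first differences, one for each t = 2, ..., T;
# here a noisy stand-in for the output of a model estimated on np.diff(y)
dy_hat = np.diff(y) + 0.1 * np.random.randn(len(y) - 1)

# Undo the differencing as in the formula above: y_hat_t = y_1 + sum of fitted differences
y_hat = y[0] + np.cumsum(dy_hat)        # fitted values for t = 2, ..., T

# Residuals on the original scale; the first observation has no fitted value and is dropped
resid = y[1:] - y_hat
rss = np.sum(resid ** 2)

# For a fair comparison, trim the first observation from the other models' fitted
# values and residuals too, so every model is evaluated on the same y[1:]
```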

Richard Hardy
  • Much clearer now, thank you. For models that have been estimated with different methods, e.g. an ARIMA model with MLE and a feedforward neural network that minimises the MSE, does comparing them with an information criterion make sense, or should I use _simpler_ goodness-of-fit metrics like the MSE, for instance? Also, could you point me to books that cover the theory related to these issues? Thank you – DivineComedy Aug 28 '16 at 09:43
  • MSE ignores model complexity and therefore is not suitable for comparing models of different complexity *in sample*. However, (pseudo) *out of sample* you could compare models based on your loss function. If it is squared loss, MSE would work. Sorry, I don't know really good books on this topic. There is Konishi & Kitagawa "Information criteria and statistical modeling" (2008), but I have not read much of it, so I cannot really tell whether I find it useful. – Richard Hardy Aug 28 '16 at 09:54
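For what it is worth, a rough sketch of such a pseudo-out-of-sample comparison with squared loss (the helper name `rolling_mse` and the wrapper interface are mine, not a standard API):

```python
import numpy as np

def rolling_mse(y, fit_and_forecast, window=250):
    """One-step-ahead pseudo-out-of-sample MSE for a generic model.

    `fit_and_forecast(history)` should re-estimate the model on `history`
    and return a one-step-ahead forecast; wrap an ARIMA, an HAR regression
    or a neural network in such a function to compare them on equal terms.
    """
    errors = []
    for t in range(window, len(y)):
        errors.append(y[t] - fit_and_forecast(y[:t]))
    return float(np.mean(np.square(errors)))
```

Because every model is scored with the same loss on the same forecast targets, this avoids the in-sample problem that the MSE ignores model complexity.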