Rob J. Hyndman has two posts here and here on forecasting weekly series. He suggests using a regression with ARIMA errors,
$$ y_t = a + \sum_{k=1}^K (\alpha_k \sin(2\pi k t/m)+\beta_k \cos(2\pi k t/m)) + N_t $$
where $N_t$ is an ARIMA process, $m=365.25/7$ is the seasonal period and $K$ can be selected using AIC.
The model seems to imply a complicated form of seasonality, rather than additive seasonality, which I would consider simple. On the other hand, if the seasonality were additive and constant over time, and there were no deterministic time trends, then a more natural approach would seem to be the following two-stage procedure:
Stage 1: estimate
$$ y_t = a + \sum_{k=1}^K (\alpha_k \sin(2\pi k t/m)+\beta_k \cos(2\pi k t/m)) + \varepsilon_t $$
where $\varepsilon_t$ is the error term (in Stage 1 we do not put any structure on it), then obtain the fitted values $\tilde{y}_t$ and the residuals $\hat{\varepsilon}_t = y_t - \tilde{y}_t$.
Stage 2: model the residuals $\hat{\varepsilon}_t$ as an ARIMA process.
Stages 1 and 2 could be done sequentially without loss of efficiency, since the Fourier terms are deterministic and will be (asymptotically) uncorrelated with the other regressors. (The argument works fine for linear regressions, but I am not sure it carries over to the context of ARIMA models.)
Questions:
- Am I wrong at some point?
- When would Rob J. Hyndman's approach be preferred to the two-stage approach?
(My main interest here is actually seasonal adjustment rather than forecasting.)