
Regression models (at least up to GLMs) do not traditionally require stationarity (although the i.i.d.-style assumption placed on the residuals is, in a sense, even stronger than stationarity).

ARMA-style time series models seem to always require stationarity. What other models always require stationarity?

Vague area: using a linear regression y ~ x when both x and y are time series. In special cases, the residuals could be i.i.d. even though x and y are highly nonstationary. Of course, when both variables have a clear time trend, time becomes a confounding variable, a problem that matters only if you are interested in interpreting coefficients rather than simply making predictions.
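To make that "vague area" concrete, here is a minimal simulation sketch (hypothetical data, not from the original post): when y is generated as a linear function of a random-walk x plus i.i.d. noise, the regression residuals behave well even though both series are nonstationary, whereas regressing on an unrelated random walk produces the classic spurious-regression pattern.

```r
set.seed(1)
n <- 500
x <- cumsum(rnorm(n))   # random walk: highly nonstationary
y <- 2 * x + rnorm(n)   # y inherits the nonstationarity, but the errors are i.i.d.

fit <- lm(y ~ x)
acf(resid(fit))         # residuals look like white noise despite nonstationary x and y

# Contrast: an *independent* random walk as the regressor gives a
# "spurious regression" -- strongly autocorrelated residuals and
# misleading t-statistics.
z <- cumsum(rnorm(n))
fit_spurious <- lm(y ~ z)
acf(resid(fit_spurious))
```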

Motivating example: Distributed Lag Nonlinear Models (see the dlnm package in R) use GLMs to estimate the effect of one time series on another time series. I'm having a hard time deciding whether it ever makes sense to use this tool on nonstationary series.
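For reference, a sketch of a typical dlnm workflow, adapted loosely from the package's documented examples and its bundled chicagoNMMAPS data; the df, lag, and centering choices here are illustrative rather than recommendations. Note that the GLM handles the long-term trend and seasonality of the outcome through a spline in time rather than by requiring a stationary series up front.

```r
library(dlnm)
library(splines)

# Cross-basis for temperature: nonlinear exposure-response and lag-response.
cb_temp <- crossbasis(chicagoNMMAPS$temp, lag = 30,
                      argvar = list(fun = "ns", df = 5),
                      arglag = list(fun = "ns", df = 4))

# Quasi-Poisson GLM for daily deaths; ns(time, ...) absorbs the long-term
# trend and seasonality (the nonstationary part of the outcome), dow is day of week.
model <- glm(death ~ cb_temp + ns(time, 7 * 14) + dow,
             family = quasipoisson(), data = chicagoNMMAPS)

# Predicted exposure-lag-response association, centered at 21 C.
pred <- crosspred(cb_temp, model, cen = 21)
plot(pred, "overall", xlab = "Temperature (C)", ylab = "Relative risk")
```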

zkurtz
  • I do not see how the concept of stationarity (as a [property of a stochastic process with infinite support](http://en.wikipedia.org/wiki/Stationary_process)) even applies to most regression models. What do you take stationarity to be, exactly? – whuber Aug 01 '14 at 20:13
  • The definition of stationarity on wikipedia, as you cited, is the one I had in mind. – zkurtz Aug 01 '14 at 20:26
  • You are right, regression models don't normally even mention stationarity. – zkurtz Aug 01 '14 at 20:27
  • Why mix regression and stationarity? Well, if you're regressing one time series on another with the hope of identifying something like a causative effect, in some cases you want to remove the time trend first so that time does not confound the association. Removing the time trend can amount to making the series stationary in some cases, and this can be done either within the regression or as a preprocessing step (a small sketch appears after these comments). – zkurtz Aug 01 '14 at 20:34
  • @whuber I find myself quite often concerned about stationarity. I work with a lot of economic and financial time series data. If you're not accounting for stationarity, then you're going to have problems. To the OP, I'm more familiar with traditional time series models (ARMA, VAR, ECM), but generally if your forecasts become ridiculous more than a few periods out, you can usually trace it back to a stationarity problem. – John Aug 01 '14 at 20:54
  • @zkurtz Can you amplify what you mean by "require stationarity"? I have used regression to conduct time series analyses, and while I can conduct such regressions in ways that more or less attend to the stationarity or integratedness of the DV and/or residuals, if I *do* ignore integratedness or near-integratedness of my data, then inferences based on my regression results are likely invalid. But I don't say that "regression requires stationarity." – Alexis Aug 01 '14 at 23:26
  • (a) [All models are wrong but some are useful](http://stats.stackexchange.com/questions/57407/what-is-the-meaning-of-all-models-are-wrong-but-some-are-useful), and (b) the requirements of a model depend on its intended use; both (a) and (b) contribute to the murkiness of my question. I guess I'm looking for insights that help to see through that fog. – zkurtz Aug 02 '14 at 02:18
  • @Alexis: how about, a model "requires stationarity" of one of its variables whenever the validity of some basic inferential statement based on the model depends on the stationarity of that variable. – zkurtz Aug 02 '14 at 02:18
  • How about: failing to account for integration (or near-integration, unit roots, near unit roots, non-stationarity) of one's data and residuals in a regression context threatens the validity of one's inferences based on the regression. – Alexis Aug 02 '14 at 04:06
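As a footnote to the comment about removing the time trend within the regression versus as a preprocessing step, here is a small sketch on simulated (hypothetical) data showing that the two routes give the same coefficient on x, by the Frisch-Waugh-Lovell theorem.

```r
set.seed(2)
n <- 200
t <- 1:n
x <- 0.05 * t + rnorm(n)            # trending predictor
y <- 1.5 * x + 0.02 * t + rnorm(n)  # trending outcome; time confounds y ~ x

# (1) Remove the trend within the regression: include time as a covariate.
coef(lm(y ~ x + t))["x"]

# (2) Detrend as a preprocessing step, then regress residual on residual.
x_dt <- resid(lm(x ~ t))
y_dt <- resid(lm(y ~ t))
coef(lm(y_dt ~ x_dt))["x_dt"]       # identical to (1) by Frisch-Waugh-Lovell
```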

0 Answers