
Say I have two time series which each follow the AR(1) model:

$$ X_{t+1} = X_t + (1 - \theta_X) (\mu_X - X_t) + \epsilon_X(t) $$

$$ Y_{t+1} = Y_t + (1 - \theta_Y) (\mu_Y - Y_t) + \epsilon_Y(t) $$

Here, $\theta_X$ and $\theta_Y$ are parameters with absolute value less than one, $\mu_X$ and $\mu_Y$ are the means of the time series, and $\epsilon_X(t)$ and $\epsilon_Y(t)$ are independent zero-mean normal innovations with standard deviations $\sigma_X, \sigma_Y > 0$. I believe the parameters can be estimated from the data.

What would be a test similar to the $t$-test which would allow me to reject the null hypothesis that $X_{t}$ and $Y_{t}$ are sampled from distributions which have the same mean?

Open Season
  • See my answer in a recent thread: [t-test for time series (Diebold Mariano test?)](https://stats.stackexchange.com/questions/434086/). I also think this question might have been asked and answered before. (It is so fundamental that it would be surprising if no one had asked it before.) Have you tried looking for duplicates? You could start [here](https://stats.stackexchange.com/questions/tagged/t-test+time-series). – Richard Hardy Nov 02 '19 at 07:39

1 Answer


A simple approach would be to use a $t$-test just as you would for cross-sectional data, but replace the vanilla estimate of the standard deviation with one that is robust to autocorrelation, e.g. a Newey-West (HAC) estimate.

Richard Hardy