
I keep reading that a second-order (strictly) stationary time series has constant mean and variance and an autocovariance that is time-independent, but I can't find a proof of that.

My definition of such a time series would be that $(X_1, \dots, X_n) =^D (X_{1+h}, \dots, X_{n+h})$ for all $n, h$, where $=^D$ means equality in distribution, and that $\mathbb{E}X_t^2 < \infty$ for all $t$.

Could you show or point me to one?


I think I managed to prove the constant mean: since $(X_1, \dots, X_n) =^D (X_{1+h}, \dots, X_{n+h})$ for all $n, h$, setting $n=1$ gives $X_1 \sim X_{1+h}$, and this holds for every $h$, so $X_1 \sim X_2 \sim \dots$ and so on. Since $EX_t^2 < \infty$ we have $E|X_t| < \infty$, so all the random variables must have the same (constant) mean.

I got stuck at the variance and autocovariance though.

blahblah
    That's because that is a definition. See, e.g., here for more https://stats.stackexchange.com/questions/65353/what-is-a-second-order-stationary-process – Christoph Hanck Mar 15 '21 at 11:10
  • Well, my definition of second-order (strict?) time series was such that $(X_1, \dots, X_n) = (X_{1+h}, \dots, X_{n+h})$ where $=$ means equality in distribution. – blahblah Mar 15 '21 at 11:15
  • 4
There are notions of strict and weak stationarity, but I have never heard "second-order strict". Where does it come from? – Richard Hardy Mar 15 '21 at 14:41
  • I might have not understood my lecturer correctly. I guess the task is to show that if we have a time series that has both finite second moments for all $t$ as well as the joint distribution is time independent then this time series is also weakly stationary. Does that make more sense now? – blahblah Mar 15 '21 at 15:48
  • What is $n$? – Dilip Sarwate Mar 15 '21 at 16:24
  • @DilipSarwate; it is arbitrary – blahblah Mar 15 '21 at 16:30
  • Then your understanding that the joint distribution of $n>2$ random variables is invariant to time translation is _not_ supported by the given second-order stationarity; you need $n$th-order stationarity for your claim. – Dilip Sarwate Mar 15 '21 at 17:30
  • To the OP: The notion of equality in distribution is totally separate from the notion of stationarity (strict or weak). In the case where the elements of the time series have a joint normal distribution, there is an equivalence between the two notions, because a joint normal distribution happens to be identified once one knows its mean and covariance matrix. But I would not think of the two concepts as having any relation in general. – mlofton Mar 15 '21 at 17:35

1 Answer


I might have not understood my lecturer correctly. I guess the task is to show that if we have a time series that has both finite second moments for all $t$ as well as the joint distribution is time independent then this time series is also weakly stationary

You can read some more about your revised question in this answer. If you are told that a time series is stationary to order $2$, then what you are being told is that for every choice of time instants $t$ and $s$, the joint distribution of random variables $X_{t}$ and $X_{t+h}$ is the same as the joint distribution of $X_{s}$ and $X_{s+h}$. Put another way, the joint distribution of two random variables $h$ seconds apart is the same regardless of where on the time axis the two random variables happen to be (just so long as they are $h$ seconds apart). The time separation $h$ is the only thing that parametrizes the joint distribution of two random variables $h$ seconds apart on the time axis.

Since joint distributions uniquely define the marginal distributions, we easily deduce that $X_{t}$ and $X_{s}$ have the same distribution, and since $t$ and $s$ are arbitrarily chosen, we have that all random variables in the time series have the same distribution. More elephantinely, we might have remembered that stationarity to order $2$ implies stationarity to order $1$ and saved ourselves the formal deduction in the previous sentence. Furthermore, since we are told that the random variables have finite second moments (which implies that they have finite first moments too), we get that all the random variables have the same mean $\mu$ and the same variance $\sigma^2$.
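Spelled out for the variance (the step the question got stuck at): since $X_t =^D X_s$, every moment that exists is the same for both variables, and the second moments are finite by assumption, so $$\operatorname{Var}(X_t) = E\big[X_t^2\big] - \big(E[X_t]\big)^2 = E\big[X_s^2\big] - \big(E[X_s]\big)^2 = \operatorname{Var}(X_s),$$ which is the constant $\sigma^2$ above.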

Finally, the claim that the autocovariance is "time-independent" needs to be stated and understood properly. By definition, the autocovariance function is \begin{align} C(t,s) &= E\big[\left(X_t - E[X_t]\right)\left(X_s - E[X_s]\right)\big]\\ &= E\big[X_tX_s\big] - E\big[X_t\big]E\big[X_s\big]\\ &= E\big[X_tX_s\big] - \mu_{X_t}\mu_{X_s}\\ &= E\big[X_tX_s\big] - \mu^2 &\text{for second-order stationary time series}. \end{align} Writing $t+h$ for $s$, we have that $$C(t,t+h)= E\big[X_tX_{t+h}\big] - \mu^2$$ where for a second-order stationary time series, $E\big[X_tX_{t+h}\big]$ is a function of $h$ alone and does not depend on $t$ at all. Thus, the autocovariance function does depend on a time parameter -- the time difference between the two time instants -- but the value of the autocovariance function does not depend at all on the location of the two time instants on the time axis, as long as they are $h$ seconds apart. In short, the "time-independence" is with respect to location; the autocovariance is very much a function of the time separation $h$.
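A quick numerical sanity check (an illustration, not a proof; the AR(1) model and the parameter values below are my own choices, not from the question): for a stationary Gaussian AR(1) process, which is strictly stationary, the sample autocovariance at a fixed lag $h$ estimated over the first half of a long realization should roughly agree with the estimate over the second half, and both should be near the theoretical value $\varphi^h/(1-\varphi^2)$.

```python
import numpy as np

# Simulate a stationary Gaussian AR(1): X_t = phi * X_{t-1} + eps_t,
# with X_0 drawn from the stationary distribution N(0, 1/(1 - phi^2)).
rng = np.random.default_rng(0)
phi, n = 0.7, 200_000
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0] / np.sqrt(1 - phi**2)   # start in the stationary law
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

def sample_autocov(segment, h):
    """Sample autocovariance of a 1-D array at lag h."""
    m = segment.mean()
    return np.mean((segment[:-h] - m) * (segment[h:] - m))

h = 3
early = sample_autocov(x[: n // 2], h)   # estimate from the first half
late = sample_autocov(x[n // 2 :], h)    # estimate from the second half
theory = phi**h / (1 - phi**2)           # true autocovariance at lag h
print(early, late, theory)
```

Both empirical estimates should be close to `theory`: the autocovariance depends on the lag $h$ but not on where along the time axis it is measured.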

A time series is said to be weakly stationary if it has constant mean, finite second moment, and its autocorrelation function $R(t,s) = E\big[X_tX_s]$ depends only on $s-t$, the difference of the two time instants, and not upon the individual values of $t$ and $s$. I will leave it to you to determine whether these conditions are satisfied.

Dilip Sarwate