
I have a question about the MA(2) model $X_t = e_t + 0.5e_{t-1} + 0.4e_{t-2}$ with $e_t \sim \mathrm{IID}(0, \sigma^2_e)$.

I know that, by construction, an MA process is (weakly) stationary, but can it also be strictly stationary, given that the errors are IID?

To show it is strictly stationary, I computed $E(X_t)$, $E(X_{t-5})$, $V(X_t)$, and $V(X_{t-5})$ and found that they are the same ($\text{mean}=0$ and $\text{var}=1.41\sigma^2_e$).
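Spelled out, that computation (using only $e_t \sim \mathrm{IID}(0, \sigma^2_e)$) is

$$E(X_t) = E(e_t) + 0.5\,E(e_{t-1}) + 0.4\,E(e_{t-2}) = 0, \qquad V(X_t) = (1 + 0.5^2 + 0.4^2)\,\sigma^2_e = 1.41\,\sigma^2_e,$$

and neither expression depends on $t$, so the same values hold at $t-5$.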

So does that make the process time-invariant? I know the variance is finite, which implies weak (wide-sense) stationarity, but I can't get a handle on the strict case. I am also confused about how the IID assumption relates to strict stationarity.

Yana
  • Hi: strictly stationary means that if you look at observations at time $t$ and calculate the joint distribution of, say, $n$ data points at that time, and then look at the data at time $t+h$ and calculate the joint distribution of $n$ data points at that time, then they are the same. So I think you are doing that, sort of, but you should check three data points, say $x_1$, $x_2$, $x_3$ versus, say, $x_5$, $x_6$, $x_7$, since you have an MA(2). I say three because the MA(2) process is not correlated after lag 2, so there's really nothing to check beyond 3 data points. – mlofton Apr 07 '21 at 15:25

1 Answer


There's a deeper and far more general result lurking here: a moving window operation on any strictly stationary process produces a strictly stationary process.

The demonstration is just a matter of applying definitions. I'll take you through it step by step, emphasizing the simplicity and--I hope--the obviousness of each one.

Let's begin with this "initial statement" of a trivial proposition. It will be applied repeatedly below.

Let the $p$-variate random variables $\mathbf X = (X_1,\ldots, X_p)$ and $\mathbf Y = (Y_1,\ldots, Y_p)$ have the same distribution and let $f:\mathbb{R}^p\to \mathbb R^q$ be any (measurable) function. Then $f(\mathbf X)$ and $f(\mathbf Y)$ are identically distributed.

Proof: the distributions of the results of applying $f$ depend only on the distributions of its arguments, which are identical, QED. Note that the resulting identical distributions are $q$-variate and we're talking about the full joint distribution, not just the marginal distributions of their components.
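For a concrete instance, take $p = 2,$ $q = 1,$ and $f(x_1, x_2) = x_1 + x_2$: whenever $(X_1, X_2)$ and $(Y_1, Y_2)$ have the same joint distribution, $X_1 + X_2$ and $Y_1 + Y_2$ are identically distributed.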

Suppose, then, that $\mathbf X = (\ldots, X_{-1}, X_0, X_1, \ldots)$ is a time series process. As a matter of notation, whenever $T$ is a finite sequence $(t_1,\ldots, t_p)$ of integers, define

  • $\mathbf{X}_T = (X_{t_1}, X_{t_2}, \ldots, X_{t_p})$ to be the sequence of components of the process picked out by the indexes in $T$ and

  • For any integer $h,$ let $T + h = (t_1+h, t_2+h, \ldots, t_p+h)$ be the translate of the indexes in $T.$

A process is strictly stationary when, for any finite sequence $T$ and any integer $h,$ $\mathbf{X}_T$ and $\mathbf{X}_{T+h}$ have the same distribution.

This is the definition. In words, it says all finite-dimensional marginal distributions of the process (namely, all $\mathbf{X}_T$) are time-invariant (namely, they do not change when translated by any lag $h$).
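For example, with $T = (1, 3)$ and $h = 2,$ this requires the pair $(X_1, X_3)$ to have the same bivariate distribution as $(X_3, X_5).$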

Take any $f$ as in the initial statement. For any finite sequence $T$ and integer $h,$ it follows that $f(\mathbf{X}_T)$ and $f(\mathbf{X}_{T+h})$ are identically distributed.

Let's apply these trivia to the situation in the question. Let $g:\mathbb{R}^r\to\mathbb{R}$ be a (measurable) function. Define a new time series process from the process $\mathbf X$ via

$$Y_t = g(X_{t-r+1}, X_{t-r+2}, \ldots, X_t).$$

This is the "moving window" application of $g$ to windows of length $r.$ We need to convince ourselves that when $\mathbf{X}$ is strictly stationary, so is $\mathbf Y.$
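Before the proof, a minimal simulation sketch can make the claim concrete. Everything specific here (the Gaussian input, the choice $g = \max,$ the window length, and the sample sizes) is an illustrative assumption rather than anything taken from the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Strictly stationary input: IID standard normal noise, simulated as
# many independent replications of a short time series.
n_reps, n_time, r, h = 100_000, 30, 3, 5
X = rng.standard_normal((n_reps, n_time))

# Moving-window operation Y_t = g(X_{t-r+1}, ..., X_t) with a nonlinear
# window g = max (an arbitrary measurable choice, for illustration only).
def g(window):
    return window.max(axis=-1)

Y = np.stack([g(X[:, t - r + 1 : t + 1]) for t in range(r - 1, n_time)], axis=1)

# Compare the empirical joint distribution of (Y_s, Y_{s+1}) with that of
# (Y_{s+h}, Y_{s+h+1}); under strict stationarity of Y they must agree.
s = 2
pair_a = Y[:, [s, s + 1]]
pair_b = Y[:, [s + h, s + h + 1]]
print(pair_a.mean(axis=0), pair_b.mean(axis=0))   # similar means
print(np.cov(pair_a.T))                           # similar 2x2
print(np.cov(pair_b.T))                           # covariance matrices
```

The two printed sets of means and covariance matrices should agree up to simulation error: the windowed process $\mathbf Y$ inherits strict stationarity from its IID input.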

To show this, we must consider an arbitrary finite index set $S=(s_1,s_2,\ldots, s_q)$ and arbitrary integer $h$ and compare the distributions of $\mathbf{Y}_S$ and $\mathbf{Y}_{S+h}.$ Notice that, by construction,

$$\mathbf{Y}_S = (Y_{s_1}, Y_{s_2}, \ldots, Y_{s_q}) = (g(X_{s_1-r+1}, \ldots, X_{s_1}), \ldots, g(X_{s_q-r+1}, \ldots, X_{s_q})).$$

The right hand side can be expressed as the image of a function $f:\mathbb{R}^{rq}\to \mathbb{R}^{q}$ defined by

$$f(x_1, x_2, \ldots, x_{rq}) = (g(x_1,\ldots, x_r), g(x_{r+1}, \ldots, x_{2r}), \ldots, g(x_{(q-1)r+1}, \ldots, x_{rq})).$$

The initial statement (with $p=rq$) applies because this $f$ is (obviously) measurable. The relevant index set is

$$T = (s_1-r+1, s_1-r+2, \ldots, s_1,\quad s_2-r+1, s_2-r+2, \ldots, s_2,\quad \ldots,\\ \quad s_q-r+1, s_q-r+2, \ldots, s_q).$$

(Many of these indexes might be the same. I never insisted the indexes must be unique!)

Because $\mathbf X$ is strictly stationary, $\mathbf{X}_T$ and $\mathbf{X}_{T+h}$ have the same $rq$-variate distributions. But--applying the initial statement once more--this means $\mathbf{Y}_S = f(\mathbf{X}_T)$ and $\mathbf{Y}_{S+h}= f(\mathbf{X}_{T+h})$ have the same $q$-variate distributions, proving $\mathbf{Y}$ is strictly stationary.

The question itself concerns the window function $g:\mathbb{R}^3\to\mathbb{R}^1,$ $g(x_1,x_2,x_3) = 0.4x_1 + 0.5x_2 + x_3,$ as applied to an IID sequence of variables $(e_t),$ which is clearly strictly stationary. Because this (linear) map $g$ is measurable, we're done.
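As an optional empirical check (not needed for the proof), one can simulate this construction directly. The Gaussian noise, sample sizes, and variable names below are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(1)

# IID noise e_t, and the window g(x1, x2, x3) = 0.4*x1 + 0.5*x2 + x3
# applied to (e_{t-2}, e_{t-1}, e_t), which reproduces the MA(2)
#     X_t = e_t + 0.5*e_{t-1} + 0.4*e_{t-2}.
sigma_e = 1.0                       # noise scale (arbitrary here)
n_reps, n_time = 200_000, 12        # many replications of a short series
e = rng.normal(0.0, sigma_e, size=(n_reps, n_time))
X = e[:, 2:] + 0.5 * e[:, 1:-1] + 0.4 * e[:, :-2]

# Joint sample moments of three consecutive X's at two different times;
# strict stationarity says the two triples have the same joint distribution.
triple_a = X[:, 0:3]                # (X_t, X_{t+1}, X_{t+2}) at one time
triple_b = X[:, 5:8]                # the same triple shifted by h = 5
print(triple_a.mean(axis=0), triple_b.mean(axis=0))   # both near 0
print(np.cov(triple_a.T))           # diagonal near 1.41 * sigma_e**2,
print(np.cov(triple_b.T))           # off-diagonals near 0.7 and 0.4
```

Both triples should show means near $0,$ variances near $1.41\sigma_e^2,$ and lag-1 and lag-2 covariances near $0.7\sigma_e^2$ and $0.4\sigma_e^2,$ in agreement with strict (hence also weak) stationarity.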

whuber