
Let $X_t = \phi X_{t-1} + \epsilon_t$, $Y_t = X_t+\eta_t$

where $\{X_t\}$ is unobserved, $\{Y_t\}$ is observed, and $\{\epsilon_t\}$ and $\{\eta_t\}$ are white noise sequences. Since $\{X_t\}$ is an AR(1), we can write

$\zeta_t = Y_t - \phi Y_{t-1} = X_t+\eta_t - \phi (X_{t-1}+\eta_{t-1}) = \epsilon_t + \eta_t - \phi\eta_{t-1}$
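For reference, the covariance claim follows because $\{\epsilon_t\}$ and $\{\eta_t\}$ are uncorrelated white noise: for $k \geq 2$,

$$\text{cov}(\zeta_t,\zeta_{t+k}) = \text{cov}\!\left(\epsilon_t + \eta_t - \phi\eta_{t-1},\; \epsilon_{t+k} + \eta_{t+k} - \phi\eta_{t+k-1}\right) = 0,$$

since the two arguments share no shock in common.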

It then says $\zeta_t$ is stationary and $\text{cov}(\zeta_t,\zeta_{t+k})=0$ for $k\geq 2$. Up to this point, I am following. It then says:

$$\zeta_t \text{ can be modelled as MA(1) process}$$

I do not follow this conclusion. Specifically, if $\epsilon_t$ were not present, $\zeta_t$ would be an MA(1), but it appears to me that $\epsilon_t$ destroys this property.

See the bottom of page 5 of http://www.statslab.cam.ac.uk/~rrw1/timeseries/t.pdf

(or page 9 of the pdf doc)

Matthew Gunn
Lost1

2 Answers


It may seem that the $\epsilon_t$ term makes it so that $\{\zeta_t\}$ can't be written as an MA(1), but that is not the case!

Your Prof is correct. You can write the process $\{\zeta_t\}$ as an MA(1).

Let $b = -\phi$ and consider two representations of the process $\{\zeta_t\}$. Structurally, the process is: \begin{align*} \zeta_t &= \epsilon_t + \eta_t + b \eta_{t-1}\\ \end{align*} The process $\{\zeta_t\}$ is stationary with auto-covariance function $\gamma(k) = 0$ for $k\geq 2$. Consequently, by the Wold Decomposition Theorem, the unique representation below also exists: $$ \zeta_t = u_t + a u_{t-1} $$

where $\{u_t\}$ is a white noise process.

Note autocovariance function $\gamma(k)$ is given by: \begin{align*} \gamma(0) &= \sigma^2_\epsilon + (1 +b^2) \sigma^2_\eta = (1 + a^2) \sigma^2_u\\ \gamma(1) &= b \sigma^2_\eta = a \sigma^2_u \\ \gamma(k) &= 0 \quad \text{for }k \geq 2 \end{align*}
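As a quick numerical sanity check (not in the original answer), one can simulate the structural process with the same example values as the MATLAB snippet further down ($b = -0.4$, $\sigma^2_\epsilon = 2$, $\sigma^2_\eta = 5$) and compare sample autocovariances against these formulas; here is a Python/NumPy sketch:

```python
import numpy as np

# Example parameter values (matching the MATLAB simulation further down);
# b is the coefficient on eta_{t-1} in zeta_t = eps_t + eta_t + b*eta_{t-1}.
rng = np.random.default_rng(0)
T, s2e, s2eta, b = 500_000, 2.0, 5.0, -0.4

eps = np.sqrt(s2e) * rng.standard_normal(T)
eta = np.sqrt(s2eta) * rng.standard_normal(T)
zeta = eps[1:] + eta[1:] + b * eta[:-1]

def acov(x, k):
    """Sample autocovariance of x at lag k."""
    x = x - x.mean()
    return float(np.mean(x[: len(x) - k] * x[k:]))

gamma0 = s2e + (1 + b**2) * s2eta   # theoretical gamma(0) = 7.8
gamma1 = b * s2eta                  # theoretical gamma(1) = -2.0
print(acov(zeta, 0), gamma0)        # sample value close to 7.8
print(acov(zeta, 1), gamma1)        # sample value close to -2.0
print(acov(zeta, 2))                # close to 0
```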

Computing the Wold Decomposition:

Let $L$ denote the lag operator. Using the lag operator to write $\zeta_t$ we have $\zeta_t = \left( 1 + a L \right) u_t$

If $|a| < 1$ then $\left( 1 + a L \right)^{-1}$ exists and equating the two representations of $\zeta_t$ we can write: \begin{align*} u_t &= \left( 1 + aL\right)^{-1} \left( \epsilon_t + \eta_t + b \eta_{t-1} \right)\\ &= \left(\sum_{j=0}^\infty(-aL)^j \right)\left( \epsilon_t + \eta_t + b \eta_{t-1}\right)\\ &= \left[ \epsilon_t + \eta_t + b \eta_{t-1}\right] -a\left[\epsilon_{t-1} + \eta_{t-1} + b \eta_{t-2} \right] + a^2\left[ \epsilon_{t-2} + \eta_{t-2} +b \eta_{t-3} \right] + \ldots \end{align*}

Furthermore, you can show that $a$ is the solution to the quadratic equation in $a$:

$$ \frac{1}{a} + a = \frac{1}{b} \left( \frac{\sigma^2_{\epsilon}}{\sigma^2_\eta} + 1 \right) + b$$

One way to obtain the above equation is by equating the autocovariance function based upon the two representations of process $\{\zeta_t\}$ and solving for the root where $|a|< 1$.
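To fill in that step: the $\gamma(1)$ equation gives $\sigma^2_u = \frac{b}{a}\sigma^2_\eta$, and substituting this into the $\gamma(0)$ equation yields

\begin{align*} \sigma^2_\epsilon + (1 + b^2)\sigma^2_\eta &= (1 + a^2)\frac{b}{a}\sigma^2_\eta. \end{align*}

Dividing both sides by $b\sigma^2_\eta$ then gives $\frac{1}{b}\left(\frac{\sigma^2_\epsilon}{\sigma^2_\eta} + 1\right) + b = \frac{1}{a} + a$, i.e. the quadratic relation above.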

This might be a bit imprecise, and there might be additional regularity conditions. (Also, above I used that for $|a| < 1$ we have $(1 + aL)^{-1} = 1 - aL + a^2L^2 - a^3L^3 + a^4L^4 - \ldots$.)

Some takeaways:

  • Process $\zeta_t = \epsilon_t + \eta_t - \phi \eta_{t-1}$ can be written as an MA(1) process $\zeta_t = u_t + a u_{t-1}$ where $a$ and $u_t$ are related to $\phi$, $\epsilon_t$, and $\eta_t$ by the above formulas.

  • The same time-series can be written in multiple ways. (Note the Wold representation is unique though.)

    • For example, an AR(1) can be written as an MA($\infty$).
  • There may be a somewhat nuanced relation between structural shocks (eg. $\epsilon_t$ and $\eta_t$ here) and shocks that you might recover in reduced form estimation (in this case, $u_t$).
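For instance, with $|\phi| < 1$ the AR(1) from the question inverts to $X_t = (1 - \phi L)^{-1}\epsilon_t = \sum_{j=0}^\infty \phi^j \epsilon_{t-j}$, an MA($\infty$).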

MATLAB simulation to illustrate the principle:

T = 1000000;                       % sample size
s2e = 2;                           % variance of epsilon
s2eta = 5;                         % variance of eta
e = sqrt(s2e) * randn(T, 1);
eta = sqrt(s2eta) * randn(T, 1);
phi = -.4;                         % coefficient on eta_{t-1}
zeta = e + eta + phi * lagmatrix(eta, 1);   % note: lagmatrix pads zeta(1) with NaN
m = estimate(arima(0,0,1), zeta)   % fit an MA(1) to zeta
a_est = m.MA{1};
disp('each pair below should be roughly equal')
[1 / a_est + a_est, (1/phi) * (1 + s2e / s2eta) + phi]
[m.Variance, (phi / a_est) * s2eta]
beta = 1/phi * (s2e / s2eta + 1) + phi;
a = (beta + sqrt(beta^2 - 4)) / 2; % quadratic-formula root with |a| < 1
% recover u_t by truncated inversion: u_t = sum_{j=0}^{18} (-a)^j zeta_{t-j}
u = zeros(T, 1);
for i = 20:T                       % start at 20 so all 19 lags exist
    for j = 0:18
        u(i) = u(i) + (-a)^j * zeta(i-j);
    end
end
zeta2 = u + a * lagmatrix(u, 1);   % MA(1) form; matches zeta up to truncation error

The two representations `zeta` and `zeta2` are equivalent (up to the truncation of the infinite sum), and estimating an MA(1) model on `zeta` recovers `a_est`, which matches the coefficient `a` computed from the quadratic formula.
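The quadratic relation can also be checked without any simulation; the following Python sketch (using the values from the MATLAB snippet, where `phi` plays the role of the coefficient `b` on $\eta_{t-1}$) solves for the root with $|a| < 1$ and confirms both representations imply identical autocovariances:

```python
import numpy as np

# Example values from the MATLAB snippet; b is the coefficient on eta_{t-1}.
s2e, s2eta, b = 2.0, 5.0, -0.4

beta = (1.0 / b) * (s2e / s2eta + 1.0) + b      # RHS of the quadratic relation
# 1/a + a = beta  <=>  a^2 - beta*a + 1 = 0; take the invertible root.
cands = [(beta + np.sqrt(beta**2 - 4)) / 2, (beta - np.sqrt(beta**2 - 4)) / 2]
a = next(r for r in cands if abs(r) < 1)
s2u = (b / a) * s2eta                            # implied variance of u_t

# Autocovariances from the structural and MA(1) representations agree:
print(s2e + (1 + b**2) * s2eta, (1 + a**2) * s2u)   # gamma(0), both 7.8
print(b * s2eta, a * s2u)                            # gamma(1), both -2.0
```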

Matthew Gunn
  • Why does the Wold decomposition tell us that we can write $\zeta_t = u_t + a u_{t-1}$? Why only two terms? – Taylor Jan 25 '17 at 16:40
  • @Taylor We know the auto-covariance function of $\zeta_t$ is 0 for lags greater than 2 ($\gamma(k)=0$ for $k\geq2$), and if you have more non-zero coefficients in the Wold representation than $a$, you're going to get non-zero auto-covariance for some higher lags $k>2$. Checkout page 8 of [this pdf](https://faculty.washington.edu/ezivot/econ584/notes/timeSeriesConcepts.pdf) for the relation between the auto-covariance function and coefficients in the Wold representation. – Matthew Gunn Jan 25 '17 at 17:27
  • @Taylor BTW, I haven't done some of this stuff super recently, and there may be slicker ways to do the math. – Matthew Gunn Jan 25 '17 at 17:39
  • how do you get that quadratic from equating the two autocovariances? – Taylor Jan 25 '17 at 20:35
  • @Taylor I solved the system $\sigma^2_\epsilon + (1 +b^2) \sigma^2_\eta = (1 + a^2) \sigma^2_u$ and $ b \sigma^2_\eta = a \sigma^2_u$. Combining those (i.e. substitute out $\sigma^2_u$) gives the quadratic equation in $a$ i wrote above (and you can see it's quadratic by multiplying both sides by $a$). – Matthew Gunn Jan 25 '17 at 21:00
  • second into the first. okay I got the same thing – Taylor Jan 25 '17 at 22:44
  • okay I'm convinced, well done. I'll edit my answer – Taylor Jan 25 '17 at 23:04

You have a mistake, but you are correct in your last sentence. $\text{Cov}(\zeta_t,\zeta_{t+k})=0$ for $k>1$, not for $k \ge 0$. This quick cutoff is reminiscent of an MA model's autocovariance function, but you are right that they are not the same.

In particular

$$ \gamma(1) = \text{Cov}(\zeta_t,\zeta_{t+1})= -\phi \text{Var}(\eta_t) $$

and

$$ \gamma(0) = \text{Var}(\zeta_t) = \text{Var}(\epsilon_t) + \text{Var}(\eta_t)(1+\phi^2) . $$

If $\text{Var}(\epsilon_t) = 0$, then this autocovariance function would be the same as an MA(1).

Edit:

This is wrong. See above answer.

Taylor
  • I am going to drop Prof Weber an email and ask him. – Lost1 Jan 22 '17 at 21:37
  • Can you "model" process $\zeta_t$ though as $\zeta_t = u_t + \alpha u_{t-1}$ where $ \frac{1}{\alpha} + \alpha = - \frac{1}{\phi}\left( 1 + \frac{\sigma^2_\epsilon}{\sigma^2_\eta}\right) - \phi $ and $\sigma^2_u = -\frac{\phi}{\alpha} \sigma^2_\eta $? (If I got my algebra right...) – Matthew Gunn Jan 22 '17 at 22:27
  • Shouldn't there be a [Wold representation](https://en.wikipedia.org/wiki/Wold's_theorem) with one lag? – Matthew Gunn Jan 23 '17 at 15:50
  • In that link the $\eta_t$ is deterministic. Also, I don't follow the first comment. I didn't check too carefully because it seems the question is misquoting the source: "$\xi_t$ can be modelled as a MA(1) process and $\{Y_t\}$ as an ARMA(1,1)." He left off the second part. – Taylor Jan 23 '17 at 18:52