
Reading "Applied Econometric Time Series" by Walter Enders (fourth edition), I am trying to derive the stationary AR(p) model as he does on page 58.

This is the AR(p) model:

\begin{equation} y_t=a_0+\sum_{i=1}^pa_iy_{t-i} +\varepsilon_t \label{ARp} \end{equation}

From this, assuming all the roots of the homogeneous equation lie inside the unit circle, he gets the result

\begin{equation} y_t=\frac{a_0}{1-\sum_{i=1}^pa_i} + \sum_{i=0}^\infty c_i\varepsilon_{t-i} \label{particular} \end{equation}

Where $c_i$ are the undetermined coefficients.
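As a quick sanity check on the stationary mean $\frac{a_0}{1-\sum_{i=1}^pa_i}$, here is a short simulation (my own illustration, not from Enders) with made-up AR(2) coefficients:

```python
import numpy as np

# Made-up stationary AR(2): y_t = a0 + a1*y_{t-1} + a2*y_{t-2} + eps_t
# (characteristic roots are about 0.76 and -0.26, both inside the unit circle)
a0, a1, a2 = 1.0, 0.5, 0.2
rng = np.random.default_rng(0)
n = 200_000
eps = rng.normal(0.0, 1.0, n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = a0 + a1 * y[t - 1] + a2 * y[t - 2] + eps[t]

sample_mean = y[n // 2:].mean()          # drop the first half as burn-in
theoretical_mean = a0 / (1 - a1 - a2)    # a0 / (1 - sum of a_i)
print(sample_mean, theoretical_mean)
```

For these values the two numbers agree closely, which is at least consistent with the first term of the stationary solution.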

To practice, as this is still new to me, I wanted to see if I could get the same result both by iterated substitution and by the lag operator. My problem is that I'm getting different results.

I don't know if this post is too long for a forum like this, but here goes with my approach. I hope you can help me by pointing out my mistakes.

First method: iterated substitution

Assuming we know the value in period 0, $y_0$, we can write, by iterated substitution,

\begin{equation} y_t=a_1^ty_0 + a_0\sum_{j=0}^{t-1}\sum_{i=1}^pa_i^jy_{t-i} +\sum_{j=0}^{t-1}a_1^j\varepsilon_{t-j} \label{ARpiterated} \end{equation}

Taking the expected value, and because $\varepsilon_{t-\kappa}\sim IID(0,\sigma^2)$ for all $\kappa$, we get

\begin{equation} E(y_t) = a_1^ty_0 + a_0\sum_{j=0}^{t-1}\sum_{i=1}^pa_i^jy_{t-i} \end{equation}

If we have that $|a_i|<1$, we can express the sum $\sum_{j=0}^{t-1}a_i^j$ as a Maclaurin series for an infinite geometric sum; factoring out $a_0$,

\begin{equation} a_0[1+\sum_{i=1}^pa_i+\sum_{i=1}^pa_i^2+\sum_{i=1}^pa_i^3+\sum_{i=1}^pa_i^4+\dots] \text{ converges to } \frac{a_0}{1-\sum_{i=1}^pa_i} \label{geometric} \end{equation}

And for $|a_1|<1$ the initial-value term $a_1^ty_0$ will converge to 0, so by iterated substitution we have, for $t\to\infty$, that

\begin{equation} y_t=\frac{a_0}{1-\sum_{i=1}^pa_i} + \sum_{i=0}^\infty c_i\varepsilon_{t-i} \end{equation}

Where $c_i$ are the undetermined coefficients.
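For the $p=1$ special case, the two deterministic pieces of the iterated-substitution solution can be watched converging numerically; the numbers below are made up for illustration:

```python
# AR(1) pieces of the iterated-substitution solution (made-up numbers):
# the initial term a1^t * y0 should decay to 0, and the drift term
# a0 * sum_{j<t} a1^j should approach a0 / (1 - a1) = 10
a0, a1, y0 = 2.0, 0.8, 5.0
for t in [1, 5, 20, 100]:
    initial_term = a1**t * y0
    drift_term = a0 * sum(a1**j for j in range(t))
    print(t, initial_term, drift_term)
```

By $t=100$ the initial term is essentially zero and the drift term is essentially $a_0/(1-a_1)$.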

So that was one method. Enders uses the $c_i$ term; I'm still not sure about that one.

Second method: the lag operator

We can write our AR(p) model as

\begin{equation} y_t=a_0 + a_1y_{t-1}+a_2y_{t-2}+\dots+a_py_{t-p}+\varepsilon_t \label{ARlang} \end{equation}

Using the lag operator $L$, where $Ly_t=y_{t-1}$, we get the lag-polynomial form

\begin{equation} (1-a_1L-a_2L^2-\dots-a_pL^p)y_t=a_0+\varepsilon_t \end{equation}

Setting the polynomial equal to

\begin{equation} a(L)=1-a_1L-a_2L^2-\dots-a_pL^p \end{equation}

we can write our AR(p) model as

\begin{equation} a(L)y_t=a_0+\varepsilon_t \label{lagARp} \end{equation}
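The relation $a(L)y_t=a_0+\varepsilon_t$ can be checked numerically by applying the lag polynomial to a simulated path; the AR(2) coefficients below are hypothetical:

```python
import numpy as np

# Simulate a made-up AR(2) path, then apply a(L) = 1 - a1*L - a2*L^2 to it;
# the result should reproduce a0 + eps_t
a0, a1, a2 = 1.0, 0.5, 0.2
rng = np.random.default_rng(1)
n = 1_000
eps = rng.normal(0.0, 1.0, n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = a0 + a1 * y[t - 1] + a2 * y[t - 2] + eps[t]

lhs = y[2:] - a1 * y[1:-1] - a2 * y[:-2]   # a(L) applied to y_t, for t >= 2
print(np.allclose(lhs, a0 + eps[2:]))       # True
```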

By recognising that we can take the inverse of $a(L)$ to get a geometric series, we get

\begin{equation} a^{-1}(L) = \frac{1}{1-a(L)}=\sum_{i=0}^\infty a^iL^i \label{aminus} \end{equation}

By inserting this into the lag definition of the AR(p) we get

\begin{equation} y_t = a^{-1}(L)a_0 + a^{-1}(L)\varepsilon_t \label{inserted} \end{equation}

And we get the result

\begin{equation} y_t = \sum_{i=0}^\infty a^iL^i a_0 + \sum_{i=0}^\infty a^iL^i \varepsilon_t = \frac{a_0}{1-\sum_{i=1}^pa_i} + \sum_{i=0}^\infty a_i\varepsilon_{t-i} \label{insertedresult} \end{equation}

So two different results. One has $c_i$ coefficients and one has $a_i$ coefficients.
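For what it's worth, the coefficients that a genuine power-series inversion of $a(L)$ produces can be computed by matching powers of $L$ in $a(L)\sum_j c_jL^j=1$, which gives the recursion $c_0=1$, $c_j=\sum_{i=1}^{\min(j,p)}a_ic_{j-i}$. A short sketch with made-up AR(2) coefficients:

```python
import numpy as np

# Invert a(L) = 1 - a1*L - a2*L^2 as a power series (made-up coefficients):
# matching powers of L in a(L) * sum_j c_j L^j = 1 gives
#   c_0 = 1,  c_j = a1*c_{j-1} + a2*c_{j-2}   (j >= 1, with c_{-1} = 0)
a = [0.5, 0.2]
n_terms = 50
c = np.zeros(n_terms)
c[0] = 1.0
for j in range(1, n_terms):
    c[j] = sum(a[i] * c[j - 1 - i] for i in range(min(j, len(a))))

# Check: convolving (1, -a1, -a2) with the c_j should return 1, 0, 0, ...
check = np.convolve([1.0, -a[0], -a[1]], c)[:n_terms]
print(c[:4])                                       # c_0..c_3 = 1, 0.5, 0.45, 0.325
print(np.allclose(check, np.eye(1, n_terms)[0]))   # True
```

Here each $c_j$ mixes several powers of the $a_i$, so the weights are generally neither $a_i$ nor $a^i$.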

Anders
    that was interesting, but in the very, very last equation of your illustration, could you provide more details for the second equality. I don't see how either term arises but I'm not saying that you're wrong. thanks. – mlofton May 15 '20 at 02:12
  • Note that for the second term after the second equality, you took $a^{i}$ and converted it to $a_{i}$. Is that correct ? – mlofton May 15 '20 at 02:15
