In other words, why is it that, when estimating in EViews, y = c ar(1) yields a different coefficient for c than y = c y(-1), although the coefficients on ar(1) and y(-1) are the same?
Consider the model
\begin{align*}y_t&=\psi+\tilde{y}_t\\
\tilde{y}_t&=\rho \tilde{y}_{t-1}+u_t
\end{align*}
Rearrange the first equation to $y_t-\psi=\tilde{y}_t$, insert this into the second to obtain $y_t-\psi=\rho(y_{t-1}-\psi)+u_t$, and solve for $y_t$ to get
$$
y_t=\psi(1-\rho)+\rho y_{t-1}+u_t.
$$
Compare this with the model
$$
y_t=c+\rho y_{t-1}+u_t.
$$
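Matching the two parameterizations term by term (assuming $|\rho|<1$, so that $\psi$ is the unconditional mean of $y_t$, while $c$ is only the regression intercept) gives
$$
c=\psi(1-\rho)\qquad\Longleftrightarrow\qquad\psi=\frac{c}{1-\rho},
$$
which is why the two intercept estimates differ even though the autoregressive coefficient is the same.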
I do not have access to EViews, but I believe the first model corresponds to the specification y c ar(1) (estimating $\psi$) and the second to y c y(-1).
I haven't checked thoroughly, but a similar discrepancy also seems to be inherent in the following two approaches in R:
library(nlme)
library(dynlm)
# simulate an AR(1) process with mean 2 and autoregressive coefficient 0.8
y <- 2 + arima.sim(list(ar = 0.8), n = 10000)
gls(y ~ 1, corr = corAR1(0.5, form = ~1))  # estimates the mean psi directly
coef(dynlm(y ~ L(y)))                      # estimates the intercept c = psi*(1 - rho)
Output:
> gls(y~1, corr=corAR1(0.5,form=~1))
Generalized least squares fit by REML
Model: y ~ 1
Data: NULL
Log-restricted-likelihood: -14269.74
Coefficients:
(Intercept)
2.044563
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.7951246
Degrees of freedom: 10000 total; 9999 residual
Residual standard error: 1.661907
> coef(dynlm(y~L(y)))
(Intercept) L(y)
0.4189489 0.7949498
In particular, $2.044563\cdot(1-0.7951246)\approx0.4189489$ (the small discrepancy arises because gls uses an iterative (RE)ML fit rather than OLS).
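As a quick cross-check (continuing the R session above, and using the relation $\psi=c/(1-\rho)$ derived earlier), the dynlm estimates can be converted back into the implied unconditional mean, which should be close to the gls intercept:
b <- coef(dynlm(y ~ L(y)))
# implied unconditional mean: psi = c / (1 - rho)
unname(b["(Intercept)"] / (1 - b["L(y)"]))
With the estimates printed above this gives roughly $0.4189489/(1-0.7949498)\approx2.043$, close to the gls intercept of $2.044563$.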