
Please check my solution below for estimating Moving Average parameter using the Gauss-Newton (Linearization) method. I consider MA(1).

MA(1) model:

$$z_t=a_t-\theta_1a_{t-1}.$$

Solution:

The residual of this model is,

$$a_t=z_t+\theta_1a_{t-1}.$$ The residual sum of squares of this model is $$S(\theta_1)=\sum_{t=1}^Ta_t^2=\sum_{t=1}^T(z_t+\theta_1a_{t-1})^2.$$ We want to minimize this residual sum of squares. Since the moving-average model is nonlinear in $\theta_1$, we need a nonlinear estimation technique; here we use the Gauss-Newton method.

Gauss-Newton Procedure:

Let $\theta_1^{(0)}$ be the initial estimate of $\theta_1$. Then the residual sum of squares evaluated at this guess is $$S(\theta_1^{(0)})=\sum_{t=1}^T(z_t+\theta_1^{(0)}a_{t-1})^2.$$

Since the $a_{t-1}$ are not available, we recursively calculate the $a_t$ using the initial estimate $\theta_1^{(0)}$. That is, $$a_1=z_1+\theta_1^{(0)}a_{0}=z_1,\quad\text{since } a_{0}=0.$$ Then, $$a_2=z_2+\theta_1^{(0)}a_1=z_2+\theta_1^{(0)}z_1,$$ and so on. After obtaining these values, we can compute the residual sum of squares.
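The recursion above can be sketched in Python; `ma1_residuals` and `rss` are hypothetical helper names for illustration, not part of any library:

```python
import numpy as np

def ma1_residuals(z, theta1):
    """Recursively compute a_t = z_t + theta1 * a_{t-1}, starting from a_0 = 0."""
    a = np.zeros(len(z))
    a_prev = 0.0
    for t, zt in enumerate(z):
        a[t] = zt + theta1 * a_prev
        a_prev = a[t]
    return a

def rss(z, theta1):
    """Residual sum of squares S(theta1)."""
    return float(np.sum(ma1_residuals(z, theta1) ** 2))
```

For example, with $z=(1,2)$ and $\theta_1^{(0)}=0.5$, the recursion gives $a_1=1$, $a_2=2.5$, so $S(0.5)=1+6.25=7.25$.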

For the MA(1) model, $E\left[z_t|a_{t-1}\right]=E\left[a_t\right]-\theta_1a_{t-1}=-\theta_1a_{t-1}$, since $E\left[a_t\right]=0$. We approximate $E\left[z_t|a_{t-1}\right]$, viewed as a function of $\theta_1$, by a first-order Taylor expansion about the initial guess: $$E\left[z_t|a_{t-1}\right]\approx\left.E\left[z_t|a_{t-1}\right]\right|_{\theta_1=\theta_1^{(0)}}+\left.\frac{\partial E\left[z_t|a_{t-1}\right]}{\partial \theta_{1}}\right|_{\theta_1=\theta_1^{(0)}}(\theta_1-\theta_1^{(0)}).$$ Using the general form of the nonlinear regression model $$Y_t=f(X_t,\boldsymbol\beta)+e_t,$$ so that $E\left[Y_t\right]=f(X_t,\boldsymbol{\beta})$, and substituting the expansion for $E\left[z_t|a_{t-1}\right]$ gives $$Y_t=\left.E\left[z_t|a_{t-1}\right]\right|_{\theta_1=\theta_1^{(0)}}+\left.\frac{\partial E\left[z_t|a_{t-1}\right]}{\partial \theta_{1}}\right|_{\theta_1=\theta_1^{(0)}}(\theta_1-\theta_1^{(0)})+e_t.$$ Rearranging, $$Y_t-\left.E\left[z_t|a_{t-1}\right]\right|_{\theta_1=\theta_1^{(0)}}=\left.\frac{\partial E\left[z_t|a_{t-1}\right]}{\partial \theta_{1}}\right|_{\theta_1=\theta_1^{(0)}}(\theta_1-\theta_1^{(0)})+e_t.$$ Let $D_{t1}^{(0)}=\left.\frac{\partial E\left[z_t|a_{t-1}\right]}{\partial \theta_{1}}\right|_{\theta_1=\theta_1^{(0)}}$ and $\delta_{1}^{(0)}=\theta_1-\theta_1^{(0)}$. Then this is equivalent to,
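One subtlety worth checking numerically: since $a_{t-1}$ itself depends on $\theta_1$ through the recursion, the derivative $D_{t1}$ is itself computed recursively. A minimal sketch (function names are illustrative, and the recursion is verified against a finite difference):

```python
import numpy as np

def ma1_residuals(z, theta1):
    """Residuals a_t = z_t + theta1 * a_{t-1}, with a_0 = 0."""
    a, a_prev = np.zeros(len(z)), 0.0
    for t, zt in enumerate(z):
        a[t] = zt + theta1 * a_prev
        a_prev = a[t]
    return a

def ma1_derivatives(z, theta1):
    """D_t = -da_t/dtheta1, via da_t/dtheta1 = a_{t-1} + theta1 * da_{t-1}/dtheta1."""
    a = ma1_residuals(z, theta1)
    d = np.zeros(len(z))
    a_prev, d_prev = 0.0, 0.0
    for t in range(len(z)):
        d[t] = -(a_prev - theta1 * d_prev)  # note d_prev = -da_{t-1}/dtheta1
        a_prev, d_prev = a[t], d[t]
    return d
```

A central finite difference of the residuals in $\theta_1$ should match $-D_t$, which is an easy sanity check on the sign conventions.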

$$z_t-(-\theta_1^{(0)}a_{t-1})=D_{t1}^{(0)}\delta_{1}^{(0)}+e_t$$ $$Y_t^{(0)}=D_{t1}^{(0)}\delta_{1}^{(0)}+e_t,$$ where $Y_t^{(0)}=z_t-(-\theta_1^{(0)}a_{t-1})=z_t+\theta_1^{(0)}a_{t-1}$. Hence, by ordinary least squares, the estimate of $\delta_1^{(0)}$ is $$\widehat{\delta}_1^{(0)}=\left(\sum_{t=1}^T(D_{t1}^{(0)})^{2}\right)^{-1}\sum_{t=1}^TD_{t1}^{(0)}Y_t^{(0)}.$$ The updated estimate is then $$\theta_1^{(1)}=\theta_1^{(0)}+\widehat{\delta}_1^{(0)}.$$ The $\theta_1^{(1)}$ is not the final estimate but serves as the new guess for the next iteration. This iterative procedure is repeated until convergence occurs.
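Putting the whole iteration together, a minimal Gauss-Newton loop for the MA(1) case might look like this. It is a sketch under the zero-initialization assumption $a_0=0$; `gauss_newton_ma1` is an illustrative name, not a library function:

```python
import numpy as np

def gauss_newton_ma1(z, theta0=0.0, tol=1e-8, max_iter=100):
    """Gauss-Newton estimation of theta1 in z_t = a_t - theta1 * a_{t-1}."""
    z = np.asarray(z, dtype=float)
    theta = theta0
    for _ in range(max_iter):
        a = np.zeros(len(z))  # residuals a_t at the current guess
        d = np.zeros(len(z))  # derivatives D_t = -da_t/dtheta1
        a_prev, d_prev = 0.0, 0.0
        for t, zt in enumerate(z):
            a[t] = zt + theta * a_prev
            # da_t/dtheta1 = a_{t-1} + theta * da_{t-1}/dtheta1
            d[t] = -(a_prev - theta * d_prev)
            a_prev, d_prev = a[t], d[t]
        # OLS step: delta = (sum D^2)^{-1} sum D * Y, with Y_t = a_t
        delta = (d @ a) / (d @ d)
        theta += delta
        if abs(delta) < tol:
            break
    return theta
```

On data simulated from an MA(1) with $\theta_1=0.5$, the loop typically converges to an estimate near $0.5$ within a handful of iterations.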

Is this correct?

Lucas Farias
