
I want to generalize the answer here to this case of a VAR(1) model. Suppose that $X_t \in \mathbb{R}^n$ and that $\Lambda \in \mathbb{R}^{n \times n}$.

If we have the stochastic process $\left\{ X_t, t = 1, 2, \ldots \right\}$ following the model $$X_t = \Lambda X_{t-1} + e_t \quad \text{ where } e_t \sim \mathcal N (0, \Sigma)$$ what is the distribution of the initial point $X_1$?

Using the idea proposed in the linked question, suppose that $B = \mathbb V (X_t) = \mathbb V (X_{t-1})$. Then, \begin{align} B &= \mathbb V (\Lambda X_{t-1} + e_t) \\ &= \Lambda B \Lambda^{\top} + \Sigma. \end{align}


For a simple example, let $n = 2$, $\Sigma = \begin{bmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{bmatrix}$, $\Lambda = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$, and $B = \begin{bmatrix} x & y \\ z & w \end{bmatrix}$, where the free elements are non-zero.

Substituting these into the "fixed point" equation yields $$\begin{bmatrix} x & y \\ z & w \end{bmatrix} = \begin{bmatrix} \lambda_1^2 x + \sigma_1^2 & \lambda_1 \lambda_2 y \\ \lambda_1 \lambda_2 z & \lambda_2^2 w + \sigma_2^2 \end{bmatrix}$$ which implies that $$x = \frac{\sigma_1^2}{1 - \lambda_1^2} \quad w = \frac{\sigma_2^2}{1 - \lambda_2^2}$$ and $y = z = 0$.

So in this case the starting distribution would be $X_1 \sim \mathcal{N} \left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} \frac{\sigma_1^2}{1 - \lambda_1^2} & 0 \\ 0 & \frac{\sigma_2^2}{1 - \lambda_2^2}\end{bmatrix} \right)$ provided $|\lambda_1| < 1$ and $|\lambda_2| <1$.
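The closed form above is easy to verify numerically. Here is a quick sketch in Python (the particular values of $\lambda_i$ and $\sigma_i$ below are illustrative assumptions, not from the question):

```python
# Numerical check of the stationary covariance for the 2x2 diagonal case.
import numpy as np

lam1, lam2 = 0.5, -0.8   # assumed AR coefficients, |lambda_i| < 1
s1, s2 = 1.0, 2.0        # assumed innovation standard deviations

Lam = np.diag([lam1, lam2])
Sigma = np.diag([s1**2, s2**2])

# Stationary covariance from the closed form derived above.
B = np.diag([s1**2 / (1 - lam1**2), s2**2 / (1 - lam2**2)])

# It should satisfy the fixed-point equation B = Lam B Lam' + Sigma.
print(np.allclose(B, Lam @ B @ Lam.T + Sigma))  # True
```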

SOULed_Outt

1 Answer


For stationarity we require $\text{var} (X_2) = \text{var} (X_1) \equiv B$. This leads to the "fixed point" equation $$B = \Lambda B \Lambda^{\top} + \Sigma.$$ Using the work from @JarleTufto here, the "fixed point" equation leads to: \begin{align} \text{vec} (B) &= \text{vec} (\Lambda B \Lambda^{\top} + \Sigma) \\ &= \text{vec} (\Lambda B \Lambda^{\top}) + \text{vec} (\Sigma) \\ &= (\Lambda \otimes\Lambda) \text{vec} (B) + \text{vec} (\Sigma) \end{align} Then, $X_1 \sim \mathcal N \left(0, B\right)$ where $B$ is found from the following equation: $$\text{vec} (B) = \left[ \text{Id} - (\Lambda \otimes \Lambda) \right]^{-1} \text{vec} (\Sigma).$$ The inverse exists whenever the process is stationary: if all eigenvalues of $\Lambda$ have modulus less than one, then so do the eigenvalues of $\Lambda \otimes \Lambda$ (they are products of pairs of eigenvalues of $\Lambda$), so $\text{Id} - (\Lambda \otimes \Lambda)$ is nonsingular. If the inverse does not exist, the process has no stationary distribution.
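A minimal sketch of the general vec/Kronecker solution in Python (the stable $\Lambda$ and positive-definite $\Sigma$ below are generated at random purely for illustration; note that the vec identity uses column-major stacking, hence `order="F"`):

```python
# Solve vec(B) = [I - (Lam kron Lam)]^{-1} vec(Sigma) for the
# stationary covariance of a VAR(1) process.
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Assumed example inputs: scale a random matrix so its spectral
# radius is 0.9 (stationary), and build a positive-definite Sigma.
A = rng.standard_normal((n, n))
Lam = 0.9 * A / np.max(np.abs(np.linalg.eigvals(A)))
C = rng.standard_normal((n, n))
Sigma = C @ C.T + n * np.eye(n)

# vec() is column-major stacking, so vec(Lam B Lam') = (Lam kron Lam) vec(B).
vecB = np.linalg.solve(np.eye(n * n) - np.kron(Lam, Lam),
                       Sigma.flatten(order="F"))
B = vecB.reshape((n, n), order="F")

# B solves the fixed-point equation B = Lam B Lam' + Sigma.
print(np.allclose(B, Lam @ B @ Lam.T + Sigma))  # True
```

Using `np.linalg.solve` rather than forming the inverse explicitly is the standard numerical choice; both compute the same $B$.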


One nice thing I've noticed is that if $\Lambda$ and $\Sigma$ are both diagonal matrices, then the "fixed point" equation is fairly simple to solve.

$$B = \Lambda B \Lambda^{\top} + \Sigma$$ leads to $$B = \left( \text{Id} - \Lambda^2 \right)^{-1} \Sigma = \Sigma \left( \text{Id} - \Lambda^2 \right)^{-1}$$
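A quick check of the diagonal shortcut (the diagonal entries below are illustrative assumptions). Because diagonal matrices commute, both orderings of the product give the same $B$:

```python
# Diagonal case: B = (I - Lam^2)^{-1} Sigma = Sigma (I - Lam^2)^{-1}.
import numpy as np

Lam = np.diag([0.3, -0.6, 0.9])   # assumed stable diagonal Lambda
Sigma = np.diag([1.0, 0.5, 2.0])  # assumed diagonal innovation covariance

Inv = np.linalg.inv(np.eye(3) - Lam @ Lam)
B = Inv @ Sigma

# The two orderings agree, and B satisfies the fixed-point equation.
print(np.allclose(B, Sigma @ Inv))               # True
print(np.allclose(B, Lam @ B @ Lam.T + Sigma))   # True
```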
