$X_t$ is an ARMA(p,q) process. How do you derive the variance of $X_t$ conditional on $X_s$ for all $s<t$, i.e. $$\operatorname{Var}(X_t \mid X_s,\ s<t) = \,?$$
2 Answers
There is no simple formula for this, but the variance can be computed using known algorithms. You can find an algorithm for computing the auto-covariance in Brockwell and Davis (1991) (Section 3.3). Assuming you are working with a Gaussian ARMA process, this can be used to find the conditional variance using standard results for the conditional variance of the multivariate normal distribution.
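For concreteness, here is a rough base-R sketch of the multivariate normal conditioning route (only an illustration: it assumes unit error variance and approximates $\gamma(0)$ by truncating the $\text{MA}(\infty)$ weights from ARMAtoMA, rather than using the Brockwell and Davis algorithm). Up to the error-variance scaling, it should agree with the ts.extend computation below.
#Set the parameters for an ARMA process
AR <- c(0.8, 0.1)
MA <- c(0.6, 0.4, 0.4)
sigma2 <- 1
N <- 5
#Autocorrelations and approximate process variance gamma(0)
rho    <- ARMAacf(ar = AR, ma = MA, lag.max = N - 1)
psi    <- ARMAtoMA(ar = AR, ma = MA, lag.max = 1000)
gamma0 <- sigma2 * (1 + sum(psi^2))
#Unconditional covariance matrix over times t = 1, ..., N
GAMMA <- gamma0 * toeplitz(rho)
#Condition on the value at time s = 1 (the conditional covariance
#does not depend on the conditioning value itself)
S11 <- GAMMA[1, 1, drop = FALSE]
S12 <- GAMMA[1, -1, drop = FALSE]
S22 <- GAMMA[-1, -1]
S22 - t(S12) %*% solve(S11) %*% S12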
You can compute the conditional covariance matrix for the Gaussian ARMA process using the ARMA.var function in the ts.extend package in R. This package also contains broader probability functions for stationary Gaussian ARMA processes. Here is an example of the conditional variance matrix for a Gaussian $\text{ARMA}(2,3)$ process where we condition on the value at time $s=1$ and determine the conditional variance matrix over times $t=2,3,4,5$.
#Set the parameters for an ARMA process
AR <- c(0.8, 0.1)
MA <- c(0.6, 0.4, 0.4)
#Set conditioning values
N <- 5
CONDVALS <- rep(NA, N)
CONDVALS[1] <- 6
#Compute the conditional variance
library(ts.extend)
ARMA.var(n = N, condvals = CONDVALS, ar = AR, ma = MA)
Time[2] Time[3] Time[4] Time[5]
Time[2] 1.401380 1.996988 2.309347 2.447176
Time[3] 1.996988 3.940649 4.793826 5.189126
Time[4] 2.309347 4.793826 6.949907 7.847309
Time[5] 2.447176 5.189126 7.847309 10.019159

I don't know if this is a standard procedure, but I think this should work:
Starting from the representation $\theta (B) X = \phi (B) \epsilon$ for $X \sim \operatorname{ARMA}(p,q)$ (so here $\theta$ denotes the AR polynomial and $\phi$ the MA polynomial), let's assume we know the roots of $\phi(z)$ (these are usually computed anyway), so that we can write $\phi(B) = c \prod_{k=1}^q (B - \lambda_k )$. Then, using the Neumann series $$ (B - \lambda)^{-1} = -\sum_{k=0}^\infty \lambda^{-k-1} B^k , $$ we get the $\operatorname{AR}(\infty)$-representation $$ \frac{\theta (B)}{\phi (B)} X=\epsilon \\ \left( c^{-1} \theta(B)\prod_{k = 1}^q \sum_{n_k=0}^\infty \left(- \lambda_k^{-n_k - 1}\right) B^{n_k} \right) X = \epsilon \\ c^{-1} \sum_{n = 0}^\infty \left( \sum_{n_1 + \cdots + n_q = n} \prod_{k=1}^q \left(- \lambda_k^{-n_k-1}\right) \right) B^{n} \theta(B) X = \epsilon . $$
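For concreteness, a minimal R sketch of this expansion (assuming the usual parameterisation $\theta(B) = 1 - a_1 B - \dots - a_p B^p$ for the AR part and $\phi(B) = 1 + b_1 B + \dots + b_q B^q$ for the MA part; polymul is just a hand-rolled polynomial multiplication helper, and the numbers are an arbitrary example):
#Polynomial multiplication helper (coefficients in increasing powers of B)
polymul <- function(a, b) {
  out <- rep(0 + 0i, length(a) + length(b) - 1)
  for (i in seq_along(a)) {
    idx <- i:(i + length(b) - 1)
    out[idx] <- out[idx] + a[i] * b
  }
  out
}
#Example ARMA(2,3): theta(B) = 1 - 0.8B - 0.1B^2, phi(B) = 1 + 0.6B + 0.4B^2 + 0.4B^3
AR <- c(0.8, 0.1)
MA <- c(0.6, 0.4, 0.4)
m  <- 50                              #truncation order
lambda <- polyroot(c(1, MA))          #roots of phi(z)
cc <- MA[length(MA)]                  #leading coefficient c of phi
#Truncated series for each (B - lambda_k)^{-1}: coefficients -lambda_k^{-n-1}, n = 0, ..., m
inv <- 1 + 0i
for (lam in lambda) {
  series <- -lam^(-(1:(m + 1)))
  inv <- polymul(inv, series)[1:(m + 1)]
}
inv <- inv / cc                       #truncated coefficients of phi(B)^{-1}
#Multiply by theta(B) to get the AR(infinity) coefficients of theta(B)/phi(B)
pi_wt <- Re(polymul(inv, c(1, -AR))[1:(m + 1)])
round(pi_wt[1:6], 4)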
It should be no problem to evaluate this numerically for orders of $n$ up to some truncation $m$ (as in the sketch above). Then, given such an $\operatorname{AR}(m)$ approximation $\widetilde \theta (B) X \approx \epsilon$, you can compute the conditional variance in a straightforward manner: $$X_{t+h} \vert \mathcal F_t = \dots + \epsilon_{t+h} + \widetilde \theta_1 \epsilon_{t+h-1} + (\widetilde \theta_1^2 + \widetilde \theta_2) \epsilon_{t+h - 2} + (\widetilde \theta_3 + \widetilde \theta_2 \widetilde \theta_1 + \widetilde \theta_1 \widetilde \theta_2 + \widetilde \theta_1^3) \epsilon_{t+h-3} + \dots , $$ yielding the conditional variance $$\operatorname{var}(X_{t+h} \vert \mathcal F_t ) = \sigma_\epsilon^2 \left\lbrace 1 + \widetilde \theta_1^2 + (\widetilde \theta_1^2 + \widetilde \theta_2)^2 + (\widetilde \theta_3 + \widetilde \theta_2 \widetilde \theta_1 + \widetilde \theta_1 \widetilde \theta_2 + \widetilde \theta_1^3)^2 + \dots \right\rbrace $$ (terms up to $(\widetilde \theta_{h-1} + \widetilde \theta_{h-2} \widetilde \theta_1 + \dots)^2$)
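And a sketch of the variance step, reusing pi_wt (and AR, MA) from the sketch above; the $\psi$-weights of the $\operatorname{AR}(m)$ approximation are obtained with ARMAtoMA from the stats package, and the error variance $\sigma_\epsilon^2$ and horizon $h$ are assumed inputs:
sigma2 <- 1
h <- 5
#AR(m) coefficients in the convention X_t = sum_j a_j X_{t-j} + e_t
ar_approx <- -pi_wt[-1]
#psi-weights psi_1, ..., psi_{h-1} of the AR(m) approximation
psi <- ARMAtoMA(ar = ar_approx, ma = numeric(0), lag.max = h - 1)
sigma2 * (1 + sum(psi^2))             #approximate var(X_{t+h} | F_t)
#Check: since we condition on the entire past, ARMAtoMA applied to the exact
#ARMA model gives the same psi-weights, so the AR(m) step can be bypassed
psi_exact <- ARMAtoMA(ar = AR, ma = MA, lag.max = h - 1)
sigma2 * (1 + sum(psi_exact^2))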

This has the potential to be a good answer, but I have given it (-1) for now. The main problem here is that the substance of the answer is glossed over with "no problem to evaluate this numerically" and "compute in a straightforward manner". I think that most readers on this site will need more guidance on those steps in order to implement this method. If you can edit to improve then I will happily remove my downvote and replace with an upvote. (An example implementing it would also be wonderful if possible.) – Ben Aug 11 '21 at 23:05