
I am wondering about the distribution of the error term/innovation process in an ARCH/GARCH process and its implementation; I am not sure about some points. The basic assumption is

$r_t = \sigma_t \epsilon_t$

where $\sigma_t$ is the volatility, modeled by ARCH/GARCH, and the $\epsilon_t$ are usually assumed to be i.i.d. $N(0,1)$.
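For concreteness, in the GARCH(1,1) case this means (as far as I understand it)

$$\sigma_t^2 = \omega + \alpha_1 r_{t-1}^2 + \beta_1 \sigma_{t-1}^2,$$

so that, conditional on the past, $r_t$ has mean zero and variance $\sigma_t^2$ as long as $\epsilon_t$ has mean zero and variance one.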

Now my questions are:

  1. More sophisticated models drop this assumption, so I could say, e.g., that $\epsilon_t$ follows a generalized hyperbolic distribution. Then the mean does not need to be zero and the variance does not need to be equal to one. Is this correct?

  2. The rugarch package supports different distributional assumptions, but one thing is not clear to me: do they also drop the assumption of mean zero and variance one, or do they use something like a "standardized" version of each distribution?

  3. Suppose I want to fit a GARCH(1,1) assuming that the $\epsilon_t$ follow a generalized hyperbolic distribution, where the mean does not have to be zero and the variance does not need to be one. Does rugarch do a joint parameter estimation, so that in the final output I get both the parameters of the GARCH process and the parameters of my generalized hyperbolic distribution?

My last question is: how can I implement this?

I guess I have to use the following command:

ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1),
                                 submodel = NULL, external.regressors = NULL,
                                 variance.targeting = FALSE),
           mean.model = list(armaOrder = c(1, 1), include.mean = TRUE, archm = FALSE,
                             archpow = 1, arfima = FALSE, external.regressors = NULL,
                             archex = FALSE),
           distribution.model = "norm", start.pars = list(), fixed.pars = list(), ...)

where distribution.model has to be set to "ghyp". Does this assume a mean of zero and a variance of one?

I think no, right?

How can I use the hyperbolic distribution for distribution.model?
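To make question 3 concrete, here is a minimal sketch of what I think the call would look like (the sp500ret example data and the names spec and fit are just placeholders I picked; the distribution parameter names mentioned in the final comment are my understanding of rugarch's naming, not something I have verified):

library(rugarch)

data(sp500ret)  # example daily return series shipped with rugarch

spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model = list(armaOrder = c(0, 0), include.mean = TRUE),
  distribution.model = "ghyp")  # generalized hyperbolic innovations

fit <- ugarchfit(spec, data = sp500ret)

# one vector of jointly estimated parameters: mean/GARCH terms (mu, omega,
# alpha1, beta1) together with the GH distribution terms (skew, shape, ghlambda)
coef(fit)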

Stat Tistician
  • You should take the time to read the rugarch vignette. All GARCH distributions are standardized (0,1) and the derivation is clearly shown in the document. –  Apr 27 '13 at 03:53
  • But why? Consider, e.g., this article http://www3.stat.sinica.edu.tw/statistica/j20n3/j20n314/j20n314.html: they also use a ghyp to fit the first moments better. Why should the assumption of standardization be made? – Stat Tistician Apr 27 '13 at 09:03
  • And do you mean this: http://cran.r-project.org/web/packages/rugarch/vignettes/Introduction_to_the_rugarch_package.pdf? On which page can I find it? It just shows the definitions of the different GARCH processes and of the different distributions, but it does not give the connection between the two. – Stat Tistician Apr 27 '13 at 09:06
  • All distributions used are location-scale invariant. –  Apr 27 '13 at 09:11
  • This does not really help me. – Stat Tistician Apr 27 '13 at 09:12
  • All distributions used are location and scale invariant. To model the conditional mean and variance you need a distribution which can be standardized in such a way, since the assumption is that the standardized residuals are ~(0,1). This is the most fundamental and basic of assumptions, without whose understanding you are not likely to be able to proceed much further. –  Apr 27 '13 at 09:19
  • The vignette (the one your link points to) in fact displays all these assumptions, formula by formula, in addition to deriving the method by which the moments of each distribution are related to its parameters of (possibly) location, scale, skew and shape, which are needed in estimating the parameters of the dynamics driving those moments under an ARMA-GARCH process. –  Apr 27 '13 at 09:20
  • As in the article I posted, there are also versions in which people assume the error term to follow a distribution without restricting the mean to be zero and the variance to be equal to one? – Stat Tistician Apr 27 '13 at 09:23
  • It seems that the standardization used in the rugarch package contains an error, see http://stats.stackexchange.com/questions/57409/correct-formula-for-standardized-students-t-distribution/57410?noredirect=1#57410 – Stat Tistician Apr 27 '13 at 09:50
  • There is nothing wrong with the standardizations used in the rugarch package. You can test the formulae (they are all exposed in the ddist, pdist, qdist and rdist functions) and check by numerical integration to see whether they give the correct answers. Since it is open source, make an effort to check the code for yourself before propagating such claims of "error", which is very unfair to the developer. As to the post about the $1/\sigma$, this is the scaling by the GARCH conditional volatility. Finally, questions and help with the package are usually answered on r-sig-finance. –  Apr 27 '13 at 10:54
  • Code example: f = function(x) (x)*ddist("std", x, mu = 0.05, sigma=sqrt(0.04), shape=5); m1 = integrate(f, -Inf, Inf); f = function(x) (x-0.05)^2*ddist("std", x, mu = 0.05, sigma=sqrt(0.04), shape=5); m2 = integrate(f, -Inf, Inf); f = function(x) (x-0.05)^4*ddist("std", x, mu = 0.05, sigma=sqrt(0.04), shape=5); m4 = integrate(f, -Inf, Inf); # kurtosis (numerical integration): "m4$value/m2$value^2"; # kurtosis (analytic): (6/(5-4))+3 –  Apr 27 '13 at 11:03
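For reference, here is the moment check from the last comment written out as a self-contained script (it assumes the rugarch package is installed; the mean 0.05, variance 0.04 and shape 5 are the values used in the comment):

library(rugarch)

mu <- 0.05; sigma <- sqrt(0.04); shape <- 5

# first moment: should integrate to approximately mu
f1 <- function(x) x * ddist("std", x, mu = mu, sigma = sigma, shape = shape)
m1 <- integrate(f1, -Inf, Inf)

# second central moment: should integrate to approximately sigma^2
f2 <- function(x) (x - mu)^2 * ddist("std", x, mu = mu, sigma = sigma, shape = shape)
m2 <- integrate(f2, -Inf, Inf)

# fourth central moment, used for the kurtosis
f4 <- function(x) (x - mu)^4 * ddist("std", x, mu = mu, sigma = sigma, shape = shape)
m4 <- integrate(f4, -Inf, Inf)

m1$value               # numerical mean
m2$value               # numerical variance
m4$value / m2$value^2  # kurtosis by numerical integration
6 / (shape - 4) + 3    # analytic kurtosis of the standardized Student t with shape = 5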

0 Answers