
I have fitted a GARCH model to my data and estimated the parameters using both R and Matlab. Here are my results:

        Estimate  Std. Error  t value Pr(>|t|)
mu     -0.000188    0.000386 -0.48707 0.626206
omega   0.000002    0.000002  0.87820 0.379837
alpha1  0.062080    0.019779  3.13870 0.001697
beta1   0.925053    0.021205 43.62490 0.000000
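
The fitted model is a GARCH(1,1) with a constant mean:

$$r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad \sigma_t^2 = \omega + \alpha_1 \varepsilon_{t-1}^2 + \beta_1 \sigma_{t-1}^2.$$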

Do I have to reject both the mu and omega parameters? Is it really that bad? Does my model not fit the data well at all? Or have I misunderstood something?

Richard Hardy
Drac0

1 Answer


There are reasons for keeping the intercept (like mu here) in conditional mean models even if it is not statistically significant. This has been discussed in several existing threads, e.g. "When is it ok to remove the intercept in a linear regression model?". However, @ChrisHaug might be right that if you are looking at financial market data at a fairly high frequency (say, daily or higher), it is quite common to have an insignificant mean return. In that context, the application is rarely long-term forecasting, where fixing the mean return to zero would ruin the forecast, so doing so is common practice. In any case, if the mean is really small, then neither keeping it nor restricting it to zero should make a considerable difference.

omega (the intercept of the conditional variance model) should be kept in the model for the following reasons.

  1. If you force omega=0 and get alpha+beta<1 (by design of the estimation procedure, which restricts the parameters to the stationarity region defined by alpha+beta<1), then your model implies that the conditional variance decays towards zero over time, which is generally undesirable.
  2. If you force omega=0 and also explicitly force alpha+beta=1, then you end up with an EWMA estimator of the conditional variance, and the conditional variance is a random walk (which again might be undesirable).
  3. Also, note that testing for omega=0 is testing a hypothesis that the parameter is on the boundary of the parameter space (omega cannot be negative). This might have implications for the null distribution of the test statistic, making the regular $p$-value associated with the $t$-statistic inappropriate. There might be some relevant information in Francq & Zakoian "Testing the nullity of GARCH coefficients: correction of the standard tests and relative efficiency comparisons" (2009), but I am not entirely sure.
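
Points 1 and 2 can be illustrated by iterating the conditional-variance recursion directly (a minimal Python/numpy sketch; the parameter values below are illustrative, not your estimates):

```python
import numpy as np

def garch_variance_path(omega, alpha, beta, n=2000, sigma2_0=1e-4, seed=0):
    """Iterate the GARCH(1,1) recursion
    sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}."""
    rng = np.random.default_rng(seed)
    sigma2 = np.empty(n)
    sigma2[0] = sigma2_0
    eps = np.sqrt(sigma2_0) * rng.standard_normal()
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps**2 + beta * sigma2[t - 1]
        eps = np.sqrt(sigma2[t]) * rng.standard_normal()
    return sigma2

# Point 1: omega = 0 with alpha + beta < 1 -> the variance decays toward zero.
decaying = garch_variance_path(omega=0.0, alpha=0.06, beta=0.92)
print(decaying[-1] < decaying[0] / 100)  # True: the variance has collapsed

# Point 2: omega = 0 with alpha + beta = 1 -> the EWMA recursion
# sigma2_t = (1 - lam) * eps_{t-1}^2 + lam * sigma2_{t-1} with lam = beta
# (lam = 0.94 is the classic RiskMetrics choice).
ewma = garch_variance_path(omega=0.0, alpha=0.06, beta=0.94)
print(ewma.min() > 0.0)  # True: positive, but with no mean level to revert to
```

With your estimates, alpha1 + beta1 ≈ 0.987 < 1, so forcing omega=0 without also restricting alpha+beta would put you in the first case.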
Richard Hardy
  • If you would provide a link to one of the threads, I would be very grateful – Drac0 Nov 15 '16 at 19:55
  • @Drac0, got you the thread I had in mind -- see the updated answer. – Richard Hardy Nov 15 '16 at 20:06
  • @Drac0, I have not actually given an explicit answer or solid reasoning, so you could wait for alternative answers before accepting. But in any case, I stand by my suggestion not to worry about this in practice, so you can still proceed with your analysis using the model you have. – Richard Hardy Nov 15 '16 at 20:09
  • OK, I will wait for more answers. – Drac0 Nov 15 '16 at 20:15
  • Could you please tell me whether I understood that article correctly? If someone asks me about the insignificant parameter, can I answer something like: if I drop the intercept, the other parameters will be biased, so by keeping the intercept I ensure the residual term is zero-mean. Is that correct for my model? I know that article is all about a regression model – Drac0 Nov 15 '16 at 20:18
  • @Drac0, I cannot give you a definite answer. What I said is more about what is encouraged in practice, but for this concrete model one would need to think hard and see how this can be justified. So I admit there is work to be done. And please delete your "answer", as it is not an answer, just a comment. Questions should be asked as proper questions (by opening a new thread), while clarifications can be solicited in comments. – Richard Hardy Nov 15 '16 at 20:18
  • Hmm... I am fitting the model to log returns of copper spot prices. These returns are rather low... so maybe I can treat this as normal here? – Drac0 Nov 15 '16 at 20:19
  • If you are looking at financial market data at a fairly high frequency (say, daily or higher), it's quite common to have insignificant mean return. In that context, the application is *rarely* long-term forecasting where fixing mean return = 0 would ruin the forecast, and doing so is a common practice. It may or may not be justified depending on the application. – Chris Haug Nov 15 '16 at 22:49