There are reasons for keeping the intercept (like $\mu$ here) in conditional mean models even if it is not statistically significant. This has been discussed in several existing threads, e.g. "When is it ok to remove the intercept in a linear regression model?". However, @ChrisHaug might be right that if you are looking at financial market data at a fairly high frequency (say, daily or higher), it is quite common to find an insignificant mean return. In that context, the application is rarely long-term forecasting, where fixing the mean return at zero would ruin the forecast, and doing so is common practice. In any case, if the mean is really small, then neither keeping it nor restricting it to zero should make a considerable difference.
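To make the horizon argument concrete, here is a minimal back-of-the-envelope sketch (the values of $\mu$ and $\sigma$ are hypothetical, not estimated from data): under a constant-mean model the $h$-step cumulative return forecast is $h\mu$, while its standard deviation grows roughly like $\sigma\sqrt{h}$, so the mean is negligible at daily horizons but dominates over years.

```python
# Illustrative magnitudes only; mu and sigma are assumed values, not estimates.
mu = 0.0004    # hypothetical daily mean return (~10% annualized)
sigma = 0.01   # hypothetical daily return standard deviation

# h-step cumulative return forecast is h*mu; its standard deviation grows
# like sigma*sqrt(h) under an i.i.d. approximation.
for h in [1, 21, 2520]:  # one day, one month, ten years of trading days
    mean_part = h * mu
    sd_part = sigma * h ** 0.5
    print(f"h={h:5d}  cumulative mean forecast={mean_part:.4f}  forecast sd={sd_part:.4f}")
```

At $h=1$ the mean forecast (0.0004) is dwarfed by the forecast standard deviation (0.01), so setting $\mu=0$ changes little; at $h=2520$ the cumulative mean (about 1.0) exceeds the standard deviation (about 0.5), so dropping it would ruin the forecast.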
$\omega$ (the intercept of the conditional variance model) should be kept in the model for the following reasons.
- If you force $\omega=0$ and get $\alpha+\beta<1$ (by design of the estimation procedure, which restricts the parameters to the stationarity region defined by $\alpha+\beta<1$), then your model implies that the conditional variance decays towards zero over time, which is generally undesirable.
- If you force $\omega=0$ and also explicitly force $\alpha+\beta=1$, then you end up with an EWMA estimator of the conditional variance, and the conditional variance follows a random walk (which again might be undesirable).
- Also, note that testing $\omega=0$ means testing a hypothesis in which the parameter lies on the boundary of the parameter space ($\omega$ cannot be negative). This affects the null distribution of the test statistic, making the regular $p$-value associated with the $t$-statistic inappropriate. There might be some relevant information in Francq & Zakoian, "Testing the nullity of GARCH coefficients: correction of the standard tests and relative efficiency comparisons" (2009), but I am not entirely sure.