What is the difference between QMLE and MLE for estimating GARCH parameters, given that both seem to maximize the same log-likelihood?
I tried to estimate the GARCH(1,1) parameters by quasi-maximum likelihood (QMLE) with a Student's t distribution as the error distribution, using the BHHH algorithm for the optimization. Unfortunately, it did not converge. Is there a better way to do QMLE with a Student's t distribution?

reiz, welcome to Cross Validated! If you find my answer helpful, you may accept it by clicking on the tick mark to the left. If it is lacking, you may ask for clarification by commenting. This is [how Cross Validated works](https://stats.stackexchange.com/tour). – Richard Hardy Feb 12 '20 at 14:03
1 Answer
- QMLE is typically constructed by assuming that the standardized innovations are normally distributed. Even if the user does not believe this assumption is a good approximation to reality, it keeps the computations simple. See this thread for more on the intuition behind QMLE. MLE, by contrast, can assume any distribution the user believes approximates reality well, Normal or not. If both methods maximize the same likelihood, then that likelihood is the one from MLE and the method is simply MLE; calling it QMLE would not be accurate, I think. (The two objective functions are written out after this list.)
- The information you provide is too scarce to offer a definite solution to this problem. In general, you can try another optimization algorithm or another set of starting values. A number of optimization algorithms are available; see e.g. the documentation of the `rugarch` package in R and p. 46 of the corresponding vignette for some alternatives (a useful keyword is "solver"). A short `rugarch` sketch follows below.
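To make the first point concrete, here is a minimal sketch of the two objective functions for a GARCH(1,1) with conditional variance $\sigma_t^2(\theta) = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2$; the notation is mine, not from the answer. The Gaussian quasi-log-likelihood (up to an additive constant) and the standardized Student-$t$ log-likelihood with $\nu > 2$ degrees of freedom are

$$\ell_{\text{QML}}(\theta) = -\frac{1}{2}\sum_{t=1}^{T}\left[\log \sigma_t^2(\theta) + \frac{\varepsilon_t^2}{\sigma_t^2(\theta)}\right],$$

$$\ell_{\text{St}}(\theta,\nu) = \sum_{t=1}^{T}\left[\log\Gamma\!\left(\tfrac{\nu+1}{2}\right) - \log\Gamma\!\left(\tfrac{\nu}{2}\right) - \tfrac{1}{2}\log\!\big((\nu-2)\,\pi\,\sigma_t^2(\theta)\big) - \tfrac{\nu+1}{2}\log\!\left(1 + \frac{\varepsilon_t^2}{(\nu-2)\,\sigma_t^2(\theta)}\right)\right].$$

Maximizing $\ell_{\text{St}}$ is just MLE under a Student-$t$ assumption; the "quasi" label is usually reserved for maximizing $\ell_{\text{QML}}$ when the true innovations may well not be Gaussian.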
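Regarding the convergence problem, here is a minimal, hedged sketch of what the second point suggests, using `rugarch`'s Student-$t$ specification and its `solver` argument. The `returns` series is a placeholder you would replace with your own data, and the solver choices shown are only examples of the alternatives listed in the vignette.

```r
library(rugarch)

## Placeholder return series; replace with your own data.
set.seed(1)
returns <- rnorm(1000)

## GARCH(1,1) with Student-t innovations ("std" in rugarch).
spec <- ugarchspec(
  variance.model     = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model         = list(armaOrder = c(0, 0), include.mean = TRUE),
  distribution.model = "std"
)

## If one solver fails to converge, try another; "hybrid" cycles
## through several optimizers automatically.
fit <- ugarchfit(spec, data = returns, solver = "hybrid")

## Alternatively, pick a specific solver and adjust its settings, e.g.:
## fit <- ugarchfit(spec, data = returns, solver = "solnp",
##                  solver.control = list(trace = 0))

show(fit)  # estimates of omega, alpha1, beta1 and shape (t degrees of freedom)
```

The `hybrid` solver simply tries several optimizers in turn, which is often the quickest way around a single algorithm (such as BHHH) failing to converge; changing the starting values is the other easy thing to try.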
