
I am currently trying to implement a GARCH-M (GARCH-in-mean) model in Python. I cannot use existing packages, because I want to understand the model from the ground up, so I wanted to write a small but sufficient piece of code for this purpose.
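
For reference, this is the specification I have in mind, written out from the parameters used in the code below (so my notation may differ slightly from textbooks):

$$r_t = \mu + \lambda \sigma_t^2 + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t, \qquad z_t \sim N(0, 1)$$
$$\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2$$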

I wrote the class, but I have some problems. Here they are:

1) Should I reparametrize my parameters? If so, how do I do it wisely? Is the way it is done in the code a good one?

2) Picking the BFGS optimizer almost always leads to errors and no convergence, even though I have seen it used in many packages. How can I solve this, or is it typical? I am currently using Nelder-Mead optimization (which requires neither gradient nor Hessian computations), but I wanted to apply BFGS or a related optimization algorithm.

3) I am working with quasi maximum likelihood (QML) and compute the likelihood in a loop. Am I computing the robust standard errors correctly (see the sandwich formula written out after this list)? I am asking because the rugarch package in R gives slightly different estimates and rather different robust standard errors; my SEs for several parameters seem to be too high.
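
To be explicit about what I mean in 3), the robust covariance I am trying to compute in compute_se below is the usual QML sandwich estimator (as I understand it), with $H$ the Hessian of the total log-likelihood and $s_t$ the per-observation score vectors:

$$\widehat{\mathrm{Cov}}(\hat\theta) = (-H)^{-1} \Big( \sum_t s_t s_t^{\top} \Big) (-H)^{-1}$$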

The code is below.

import numpy as np
import scipy.optimize as opt
from statsmodels.tools.numdiff import approx_fprime, approx_hess

class GarchM():

  def __init__(self, params = None):

    if params is not None:
      self.params = np.array(params)

    else:
      # 0 - mu, 1 - risk premium coefficient, 2 - omega in vol eq, 3 - alpha in vol eq, 4 - beta in vol eq
      self.params = np.array([1e-3, 0.5, 1e-6,  0.09, 0.9])

  def __repr__(self):

    return 'mu = {}, lambda = {}, omega = {}, alpha = {}, beta = {}'.format(*self.params)

  def repam(self, params_repam):

    # map unconstrained optimizer-space values to positive model parameters
    return np.exp(params_repam)

  def inv_repam(self, params):

    # map positive model parameters back to the unconstrained space
    return np.log(params)

  def filter(self, y):

    mu = self.params[0]
    lambda_ = self.params[1]
    omega = self.params[2]
    alpha = self.params[3]
    beta = self.params[4]

    r = y.dropna().values
    T = r.shape[0]

    r_hat = np.zeros(T)   # conditional mean
    eps_t = np.zeros(T)   # residuals
    sigma2 = np.zeros(T)  # conditional variance

    # initialise the conditional variance and the first residual at the unconditional variance
    sigma2[0] = omega / (1 - alpha - beta)
    eps_t[0] = omega / (1 - alpha - beta)

    for t in range(1, T):

      # GARCH(1,1) recursion for the conditional variance
      sigma2[t] = omega + alpha * (eps_t[t - 1] ** 2) + beta * sigma2[t - 1]

      # GARCH-in-mean: the conditional variance enters the conditional mean
      r_hat[t] = mu + lambda_ * sigma2[t]
      eps_t[t] = r[t] - r_hat[t]

    eps_t2 = eps_t ** 2

    return eps_t2,sigma2

  def log_likelihood(self, params_repam, y, single_output = False, neg_loglik = True):

    # the optimizer works with reparametrized (log) values, so transform back first
    self.params = self.repam(params_repam)
    mu = self.params[0]
    lambda_ = self.params[1]
    omega = self.params[2]
    alpha = self.params[3]
    beta = self.params[4]

    r = y
    T = r.shape[0]

    r_hat = np.zeros(T)
    eps_t = np.zeros(T)
    sigma2 = np.zeros(T)

    sigma2[0] = omega / (1 - alpha - beta)
    eps_t[0] = omega / (1 - alpha - beta)

    loglik = np.zeros(T)

    for t in range(1, T):

      sigma2[t] = omega + alpha * (eps_t[t - 1] ** 2) + beta * sigma2[t - 1]

      r_hat[t] = mu + lambda_ * sigma2[t]
      eps_t[t] = r[t] - r_hat[t]

      # Gaussian log-likelihood contribution of observation t
      loglik[t] = - 0.5 * (np.log(2 * np.pi) + np.log(sigma2[t]) + (eps_t[t] ** 2) / sigma2[t])

    self.loglik = np.sum(loglik)

    if not single_output:
      # per-observation contributions (used for the numerical scores)
      return loglik[1:]

    elif neg_loglik:
      # negative total log-likelihood, suitable for minimization
      return -1 * np.sum(loglik[1:])
    else:
      return np.sum(loglik[1:])

  def train(self, init_params, y, callback_func = None):

    self.n_obs = y.shape[0] 

    opt_result = opt.minimize(self.log_likelihood,
                           x0 = self.inv_repam(init_params),
                           args=(y, True, True ),  
                           method='BFGS',
                           callback = callback_func,
                           options = {'maxiter': 1000})

    self.params = self.repam(opt_result.x)
    print('\nResulting params = {}'.format(self.params))
    print('\nResults of BFGS minimization\n{}\n{}'.format(''.join(['-']*28), opt_result))

  def compute_se(self, y):

    # numerical per-observation scores and Hessian, both evaluated at the
    # reparametrized (log) parameter values
    scores = approx_fprime(self.inv_repam(self.params), self.log_likelihood, epsilon = None, args = (y, False))
    hessian = approx_hess(self.inv_repam(self.params), self.log_likelihood, epsilon = None, args = (y, True, False))
    # outer-product-of-gradients term: sum over t of s_t s_t'
    ggs = np.array(list(map(lambda x: x.reshape(-1, 1).dot(x.reshape(-1, 1).T), scores)))
    GG = np.sum(ggs, axis = 0)
    QQ = np.linalg.inv(-1 * hessian)
    # QML sandwich covariance: (-H)^-1 * OPG * (-H)^-1
    coefs_cov = QQ.dot(GG).dot(QQ)

    return coefs_cov
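
For completeness, this is roughly how I call the class. The data here is only a placeholder series (my real input is a pandas Series of returns), and with BFGS this often fails to converge, which is question 2):

# minimal usage sketch with placeholder data
import pandas as pd

returns = pd.Series(np.random.normal(0, 0.01, 1000))  # stand-in for my returns series

model = GarchM()
model.train(init_params = model.params, y = returns)   # default starting values
cov = model.compute_se(returns)
print(model)
print('robust SEs (reparametrized space):', np.sqrt(np.diag(cov)))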
  • The optimizers are taken from scipy, and approximate gradients and hessians are taken from statsmodels.tools.numdiff – Krainev Konstantin Jan 04 '20 at 17:46
  • The coefficients on my data seem to be almost OK (in comparison with the rugarch package in R), but the SEs are completely different: https://sun9-4.userapi.com/c857120/v857120851/a2f23/Ds4j03uFNZw.jpg https://sun9-13.userapi.com/c854328/v854328851/1b4038/woQdWfGDWJk.jpg – Krainev Konstantin Jan 05 '20 at 18:14

0 Answers