
I'm reading this code snippet from the RegressionL1loss implementation in LightGBM:

  void GetGradients(const double* score, score_t* gradients,
                    score_t* hessians) const override {
    if (weights_ == nullptr) {
      #pragma omp parallel for schedule(static)
      for (data_size_t i = 0; i < num_data_; ++i) {
        const double diff = score[i] - label_[i];
        gradients[i] = static_cast<score_t>(Common::Sign(diff));
        hessians[i] = 1.0f;
      }
    }

at https://github.com/microsoft/LightGBM/blob/5b7a6f3e7150aeb704d1dd2b852d246af3e913a3/src/objective/regression_objective.hpp#L217-L225
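
To see concretely what the quoted loop computes, here is a small standalone sketch of the same per-row update (this is not the LightGBM code itself: score_t, Common::Sign, and the OpenMP pragma are replaced by a plain double loop and an inline sign helper, and the data is made up):

  #include <cstddef>
  #include <cstdio>
  #include <vector>

  // Standalone sketch of the per-row update in the quoted snippet:
  // gradient = sign(score - label), hessian = 1.0 for every row.
  static double sign_of(double x) { return (x > 0.0) - (x < 0.0); }

  int main() {
    std::vector<double> score = {2.5, 0.0, -1.0};
    std::vector<double> label = {1.0, 0.0, 2.0};
    for (std::size_t i = 0; i < score.size(); ++i) {
      const double diff = score[i] - label[i];
      const double gradient = sign_of(diff);  // +1, 0, or -1
      const double hessian = 1.0;             // constant, as in the snippet
      std::printf("row %zu: diff=%+.1f grad=%+.0f hess=%.1f\n",
                  i, diff, gradient, hessian);
    }
    return 0;
  }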

I wonder why the hessian is 1.0f. Shouldn't it be 0 for the L1 regression loss function?
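
For context, here is the standard calculus the question is pointing at (written out here, not taken from the LightGBM source). For a single data point with prediction \hat{y} and label y, the L1 loss, its gradient, and its true second derivative are

  \[
  L(\hat{y}, y) = \lvert \hat{y} - y \rvert, \qquad
  \frac{\partial L}{\partial \hat{y}} = \operatorname{sign}(\hat{y} - y), \qquad
  \frac{\partial^2 L}{\partial \hat{y}^2} = 0 \quad \text{for } \hat{y} \neq y,
  \]

with both derivatives undefined at \hat{y} = y. The gradient line in the snippet matches this, so the question is why the stored hessian is the constant 1.0f rather than 0.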

zyxue

0 Answers