I'm reading this code snippet from the RegressionL1loss implementation in LightGBM:
void GetGradients(const double* score, score_t* gradients,
                  score_t* hessians) const override {
  if (weights_ == nullptr) {
    #pragma omp parallel for schedule(static)
    for (data_size_t i = 0; i < num_data_; ++i) {
      const double diff = score[i] - label_[i];
      gradients[i] = static_cast<score_t>(Common::Sign(diff));
      hessians[i] = 1.0f;
    }
  }
  // ... (weighted branch omitted)
}
I wonder why the hessian is 1.0f. Shouldn't it be 0, since that is the second derivative of the L1 regression loss?