
I created a new task with `TaskRegr$new`, a learner with `lrn('regr.ranger')`, and a search space with `ps()`. I fed those to `AutoTuner$new` and then ran `resample()` with `resampling = rsmp('cv', folds = 4)`.

So that looks like

rr1 <- resample(task=task_allcols, learner=at1, resampling=rsmp('cv', folds=4), store_models = TRUE, store_backends = TRUE)
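A minimal, self-contained version of this setup might look like the sketch below. The task is built from `mtcars` as a stand-in for `task_allcols`, and the search space, inner resampling, terminator, and tuner are illustrative assumptions (the question doesn't show which ones were used):

```r
library(mlr3)
library(mlr3learners)
library(mlr3tuning)
library(paradox)

# Stand-in task (the original task_allcols is not shown in the question)
task_allcols <- TaskRegr$new(id = "cars", backend = mtcars, target = "mpg")

# Illustrative search space for the ranger learner
search_space <- ps(
  mtry = p_int(lower = 2, upper = 5)
)

# AutoTuner wraps the base learner with an *inner* resampling for tuning
at1 <- AutoTuner$new(
  learner = lrn("regr.ranger"),
  resampling = rsmp("cv", folds = 3),   # inner CV, used only during tuning
  measure = msr("regr.mse"),
  search_space = search_space,
  terminator = trm("evals", n_evals = 10),
  tuner = tnr("random_search")
)

# Outer 4-fold CV around the AutoTuner: this is nested resampling
rr1 <- resample(
  task = task_allcols, learner = at1,
  resampling = rsmp("cv", folds = 4),
  store_models = TRUE, store_backends = TRUE
)
```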

When I do `rr1$score()` I get 4 rows, each of which has a unique MSE.

When I do `rr1$learners[[x]]`, where x is 1, 2, 3, or 4, the MSE values in those objects don't match any of the preceding values.

What is the difference between these two outputs?

Dean MacGregor
  • Are you referring to the MSE values from tuning in `rr1$learners[[1]]$model$tuning_instance$result`? These are the MSE values that are obtained in the inner cross-validation. `rr1$score` shows the MSE values that are obtained once the best found hyperparameter configuration is used to train the model on the complete training data from the outer CV. – jakob-r May 14 '21 at 07:37
  • @jakob-r Thanks for that, you're right, I did mean `rr1$learners[[1]]$model$tuning_instance`, sorry I left out the last bit. How do I get the hyperparameters used for `rr1$score` then? – Dean MacGregor May 14 '21 at 14:50
  • In your 1st outer loop, the model is trained with the hyperparameters obtained during tuning. You can verify that by comparing `rr1$learners[[1]]$model$tuning_instance$result` or `...$result_learner_param_vals` and `rr1$learners[[1]]$model$learner$param_set$values`, which are identical. The score values are different because during tuning we evaluate using the inner resampling (what you pass to AutoTuner), and during resample we evaluate using the outer resampling (4-fold CV in your example). – jakob-r May 15 '21 at 21:12
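Putting the comments into code, the two kinds of MSE and the hyperparameters behind the outer scores can be inspected like this (a sketch, assuming `rr1` was built as in the question with `store_models = TRUE`):

```r
library(mlr3)

# Outer-CV MSEs: one row per outer fold. Each model was trained with the
# best hyperparameters found by tuning, then evaluated on that fold's
# held-out outer test set.
rr1$score(msr("regr.mse"))

# Inner-CV tuning results for the first outer fold: these MSEs come from
# the inner resampling, computed on that fold's training data only, so
# they will not match the outer scores above.
rr1$learners[[1]]$model$tuning_instance$result

# Hyperparameters actually used for the model scored in the first row of
# rr1$score() -- per the comment, these two are identical:
rr1$learners[[1]]$model$tuning_instance$result_learner_param_vals
rr1$learners[[1]]$model$learner$param_set$values
```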

0 Answers