As far as I know, it is a good idea to retrain the model on all the available data (train, validation, and test) after finding the best hyperparameter values by cross-validation.
However, some hyperparameters are sensitive to the dataset size, for example, the regularization parameter.
Given that, should I retrain the model on all the available data using the hyperparameter values I already found, or should I merge the training and test sets into a new training set and re-tune the regularization parameter by cross-validation, using the validation set for evaluation?
- The model I'm using is the XGBoost classifier.
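To make the first option concrete, here is a minimal sketch of the workflow I mean: tune the regularization strength by cross-validation, then let the search refit the best model on everything it was given. This is only an illustration on toy data; logistic regression's `C` stands in for XGBoost's `reg_lambda`/`reg_alpha`, since the mechanics of `GridSearchCV` are the same either way.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy data standing in for the combined train + validation + test sets.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Cross-validate the regularization strength (C here; reg_lambda in XGBoost
# plays an analogous role). With the default refit=True, GridSearchCV
# retrains the best configuration on ALL of the data passed to fit().
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
search.fit(X, y)

final_model = search.best_estimator_  # already refit on all of X, y
```

My worry is exactly that the `C` (or `reg_lambda`) chosen on the smaller cross-validation folds may no longer be optimal once the model is refit on the larger, merged dataset.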