I have a basic question on using cross-validation for model parameter tuning (model training) and model evaluation (testing), similar to this question: Model Tuning and Model Evaluation in Machine Learning.
I understand that it is suggested to use only the training set (the test set remains 'unseen') to tune the model parameter ('mtry'; I am using Random Forest (RF)), i.e. the training set is split further into training and validation sets, and k-fold cross-validation is run on it to obtain the optimum parameter value.
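For concreteness, here is roughly the tuning step I have in mind, written as a sketch in Python/scikit-learn (where `max_features` plays the role of randomForest's `mtry`); the dataset, the parameter grid, and the split sizes are just placeholders, not my actual setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder data standing in for my real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# One realization of the training/test partition; the test set stays unseen.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# k-fold CV on the training set alone to pick the 'mtry' analogue.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_features": [2, 4, 6, 8]},
    cv=5,
)
grid.fit(X_train, y_train)
best_mtry = grid.best_params_["max_features"]
```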
However, I am confused about how to then use k-fold cross-validation to evaluate the model's accuracy (i.e. to test the trained model on different test sets sampled from the whole dataset). Which of these is the right model evaluation procedure:
(1) Simply rerun RF, with the 'mtry' value tuned by CV on the training set only, on different training-test partitions, even though only one realization/partition of the training set was used to tune 'mtry' at the beginning? Or should I retune 'mtry' on each different training-set realization to begin with (a rough code sketch of this nested setup follows the two options below)? OR
(2) Run RF with the tuned 'mtry' on different bootstrap samples drawn from the single realization of the test set (the one not used to tune 'mtry')?
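To make option (1) with retuning per partition concrete, this is roughly what I mean, again sketched in scikit-learn terms with `max_features` standing in for 'mtry'; the fold counts and parameter grid are arbitrary placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Placeholder data standing in for my real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)  # tuning folds
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)  # evaluation folds

# Inner loop: tune the 'mtry' analogue on each outer training set only.
tuned_rf = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_features": [2, 4, 6, 8]},
    cv=inner_cv,
)

# Outer loop: each outer fold retunes on its training part and scores
# on its held-out test part, giving one accuracy estimate per partition.
outer_scores = cross_val_score(tuned_rf, X, y, cv=outer_cv)
print(outer_scores.mean(), outer_scores.std())
```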
Thank you, and sorry if my writing is a bit confusing.