Question: How is K-fold cross-validation better than a grid search with repeated random splits when tuning a model's hyper-parameters?
Context: I am tuning hyper-parameters in a model. I have implemented grid search so that it tries out different combinations of parameters, repeats each combination multiple times, and then averages the results. For each repetition, the training and validation sets are regenerated from scratch (the whole data set is re-split every time). If I am now told to implement K-fold cross-validation instead, how do I know which of these two methods will yield better results (and why)?
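Roughly, this is what my current approach looks like next to the k-fold alternative. This is only a minimal sketch in scikit-learn terms; the estimator, the parameter grid, the split sizes, and the repetition count are placeholders, not my actual model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

# Placeholder data and hyper-parameter grid
X, y = make_classification(n_samples=500, random_state=0)
param_values = [0.01, 0.1, 1.0, 10.0]   # hypothetical values of one parameter, C
n_repeats = 10                          # repetitions per combination in my approach

# --- My current approach: repeat each combination on fresh random splits ---
for C in param_values:
    scores = []
    for rep in range(n_repeats):
        # the whole set is re-split from scratch for every repetition
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2)
        model = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr)
        scores.append(accuracy_score(y_val, model.predict(X_val)))
    print(f"C={C}: repeated-split mean accuracy = {np.mean(scores):.3f}")

# --- The alternative: k-fold cross-validation for each combination ---
for C in param_values:
    cv_scores = cross_val_score(LogisticRegression(C=C, max_iter=1000), X, y, cv=5)
    print(f"C={C}: 5-fold CV mean accuracy = {cv_scores.mean():.3f}")
```

The key difference, as I understand it, is that my repetitions draw overlapping random splits, whereas k-fold partitions the data so every observation appears in a validation set exactly once.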
I understand that k-fold cross-validation is meant to decrease the variance we would see with a basic grid-search implementation in which each parameter combination is evaluated only once, on a single train/validation split. But what about when each combination has been repeated? By repeating each combination more than once, have I implemented cross-validation incorrectly? Including the repetitions seemed intuitive, since a single trial seemed unreliable.
Is grid search with k-fold cross-validation likely to be better than grid search with repetitions but no k-fold cross-validation?