I am doing a method comparison of some machine learning models across certain scenarios. I simulated data where the associations are known. To me, this seems like a simple way to generate as much data as I want for training, tuning, and testing (over and above the obvious benefit of knowing the exact structure of the data).
However, the idea of k-fold cross-validation during tuning is so ingrained that I wanted to ask others for input.
Can I train and tune these ML models by simply simulating the training, validation, and test sets with different seeds, or do I still need k-fold cross-validation?
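To make the question concrete, here is a minimal sketch of the workflow I have in mind. The simulator, the linear data-generating process, and the ridge tuning step are all hypothetical placeholders, not my actual models:

```python
import numpy as np

def simulate(seed, n=2000, p=10):
    # Hypothetical data-generating process: known linear signal plus noise.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))
    beta = np.arange(1, p + 1, dtype=float)  # true coefficients are known
    y = X @ beta + rng.normal(scale=5.0, size=n)
    return X, y

def ridge_fit(X, y, alpha):
    # Closed-form ridge regression (no intercept, for simplicity).
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

def mse(X, y, coef):
    return float(np.mean((y - X @ coef) ** 2))

# Independent draws from the same simulator, one seed each.
X_tr, y_tr = simulate(seed=1)
X_val, y_val = simulate(seed=2)   # fresh draw used for tuning instead of k-fold CV
X_te, y_te = simulate(seed=3)    # untouched until the final comparison

# Tune the penalty on the independent validation draw.
alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
best_alpha = min(alphas,
                 key=lambda a: mse(X_val, y_val, ridge_fit(X_tr, y_tr, a)))
final_coef = ridge_fit(X_tr, y_tr, best_alpha)
print(best_alpha, mse(X_te, y_te, final_coef))
```

So instead of folding the training data, every split is just another independent draw from the known distribution, with the test seed never touched during tuning.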
Thanks