I have a machine learning algorithm with some hyperparameters. First, I split the data into a 70% portion (A-set) and a 30% portion (B-set).
Then, I used 5-fold cross-validation on the A-set to find the best hyperparameters.
Finally, I used 10-fold cross-validation on the full dataset to report the performance of the algorithm.
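To make the procedure concrete, here is a minimal sketch of the three steps, assuming scikit-learn with a logistic-regression model and a small `C` grid as placeholders for my actual algorithm and hyperparameters:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.linear_model import LogisticRegression

# Placeholder data standing in for my actual dataset.
X, y = make_classification(n_samples=300, random_state=0)

# Step 1: split into a 70% A-set and a 30% B-set.
X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 2: 5-fold cross-validation on the A-set to pick hyperparameters.
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      param_grid={"C": [0.1, 1, 10]}, cv=5)
search.fit(X_a, y_a)
best_params = search.best_params_

# Step 3: 10-fold cross-validation on ALL the data (A-set + B-set)
# with the chosen hyperparameters, to report performance.
model = LogisticRegression(max_iter=1000, **best_params)
scores = cross_val_score(model, X, y, cv=10)
print(scores.mean())
```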
Was my approach correct? If so, is there a reference for it? Or is my approach biased, since the data used for hyperparameter tuning is reused in the final evaluation?
Thanks in advance.