I think if you look carefully you will find your answer in the posts that you have linked. The case of a global test set
is the best-case scenario, often referred to as a three-way split,
and is recommended for large datasets. For smaller datasets, however, this isn't feasible, so you can fall back on k-fold
cross-validation, taking care of the following issues:
Use separate k-fold
cross-validation loops for model selection, i.e., optimizing the hyperparameters, and for determining the generalization capability of the model: first optimize the model parameters with one round of cross-validation, then determine the generalization capability with a second, independent round (this is often called nested cross-validation).
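The separation above can be sketched with scikit-learn, where an inner cross-validated grid search handles hyperparameter selection and an outer loop estimates generalization performance. The dataset, classifier, and parameter grid here are illustrative assumptions, not part of the original answer:

```python
# Minimal sketch of nested cross-validation (inner loop for model
# selection, outer loop for estimating generalization performance).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Synthetic data standing in for a small real dataset.
X, y = make_classification(n_samples=100, random_state=0)

# Inner loop: optimize hyperparameters with one round of cross-validation.
inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=inner_cv)

# Outer loop: determine the generalization capability of the tuned model
# on folds that were never used for hyperparameter selection.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)
scores = cross_val_score(search, X, y, cv=outer_cv)
print(scores.mean())
```

The key point is that the outer-loop folds never influence the hyperparameter choice, so the outer score is not contaminated by the selection step.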
Since the number of samples is small, repeat the k-fold
cross-validation with different random splits, which helps reduce the variance of the performance estimate.
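Repeating the cross-validation over different random splits can be sketched with scikit-learn's `RepeatedKFold`; the dataset and model below are illustrative assumptions:

```python
# Minimal sketch of repeated k-fold cross-validation: the same k-fold
# scheme is run several times with different random splits, and the
# scores are averaged to reduce the variance of the estimate.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

# Synthetic data standing in for a small real dataset.
X, y = make_classification(n_samples=80, random_state=0)

# 5-fold CV repeated 10 times, each repetition with a different split.
cv = RepeatedKFold(n_splits=5, n_repeats=10, random_state=42)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"{scores.mean():.3f} +/- {scores.std():.3f}")
```

Reporting the spread across repetitions (as in the last line) also gives a feel for how unstable the estimate is on a small sample.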
These two methods are well accepted in the literature, especially repeated cross-validation. The relevant reference is Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). http://link.springer.com/article/10.1007%2Fs00216-007-1818-6
Caution
However, as you have hinted, these procedures may not produce the best results, and there are considerable issues when reporting k-fold CV results, as detailed here: Cross-validation misuse (reporting performance for the best hyperparameter value)