From my experience and from reading papers, I have noticed that in the Machine Learning / applied statistics world there are basically two approaches to hyperparameter tuning: (1) simple search algorithms (e.g. Grid/Random Search) or (2) more complex methods (CMA-ES, TPE, Bayesian Optimization).
However, there are methods that lie in between and belong to the sampling family, such as Latin Hypercube (n-Rooks), Jittered, Blue Noise, Multi-Jittered, and Quasi-Monte Carlo sampling (Halton, Sobol, or similar sequences).
These methods are heavily used in other fields, such as computer graphics, but not so much in ML. This can be seen, for example, in the scikit-learn Python package, where no such sampling methods are available whereas GridSearchCV is present.
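To illustrate what I mean by "middle ground", here is a minimal sketch of how such a sampler could be used for hyperparameter candidates. It relies on scipy.stats.qmc (which lives in SciPy, not scikit-learn); the estimator, parameter names, and log ranges are purely illustrative, not a proposed API.

```python
# Sketch: generate hyperparameter candidates with a scrambled Sobol sequence
# and evaluate them manually, instead of laying out a regular grid.
import numpy as np
from scipy.stats import qmc
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Draw 2**4 = 16 points in the unit square [0, 1)^2
sampler = qmc.Sobol(d=2, scramble=True, seed=0)
unit_points = sampler.random_base2(m=4)

# Map the unit cube to log10 ranges for C and gamma (illustrative bounds)
log_points = qmc.scale(unit_points, l_bounds=[-3, -4], u_bounds=[3, 1])
candidates = 10.0 ** log_points

results = []
for C, gamma in candidates:
    score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
    results.append((score, C, gamma))

print(max(results))  # best (score, C, gamma) among the Sobol candidates
```

The same loop works with qmc.LatinHypercube or qmc.Halton in place of qmc.Sobol, so swapping the sampling strategy is trivial; the point is that these low-discrepancy candidates cover the search space more evenly than a grid or i.i.d. random draws of the same size.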
My question, then, is: are there reasons not to use them? And is there any survey or comparison in the literature that addresses this issue?