As far as I understand, the classic Dunnett test is based on the t-test, applied to every comparison against the control and then corrected for multiplicity. So the assumptions of the Dunnett test should match those of the t-test: normal distribution, homogeneity of variances, and (in the idealized case) equal sample sizes. How is it then possible that statistical packages such as R or SAS allow the Dunnett test to be run on completely different models, like GLM, GLS, mixed models, or GEE? These models may have completely different assumptions: GLM or GEE, for example, allow a non-normal response distribution or heteroscedasticity and do not care about equal sample sizes! To focus the discussion: I saw that in R there is the function multcomp::glht, and also the emmeans package, which can perform a Dunnett analysis on more than 30 classes of models!
So how does this relate to the classic Dunnett test? Is it all the same Dunnett procedure? Exactly the same question applies to the Tukey procedure. How is it possible to have the classic Tukey (or Dunnett) test with its strict assumptions, and at the same time a Tukey (or Dunnett) procedure inside those advanced multiple-comparison engines working on very liberal models?
Or maybe Dunnett's procedure is just a kind of p-value adjustment, like Hommel, Hochberg, or Bonferroni? That would explain why it can be applied to so many different models, including the historically simplest one, the linear model (t-test).