In this PyCon Canada tutorial, the author details a simulation process, in lieu of A/B testing, in which two respective distributions are repeatedly sampled from and compared for X iterations. You can collect whatever metadata you like, such as:
- "Is $\theta_1 > \theta_2$?"
- Or "By how much is $\theta_1 > \theta_2$?"
Note that this tutorial assumed fully Bayesian posterior inference on $\theta_1$ and $\theta_2$.
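For context, my understanding of the tutorial's procedure is roughly the sketch below (assuming conversion-count data and Beta posteriors on $\theta_1$ and $\theta_2$; the tutorial's actual priors and data may differ):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed data: conversions and trials for the two variants.
conv_1, n_1 = 120, 10_000
conv_2, n_2 = 145, 10_000

# With a Beta(1, 1) prior, the posterior for each theta is a Beta distribution.
n_iter = 100_000
theta_1 = rng.beta(1 + conv_1, 1 + n_1 - conv_1, size=n_iter)
theta_2 = rng.beta(1 + conv_2, 1 + n_2 - conv_2, size=n_iter)

# Metadata collected over the iterations.
print("P(theta_1 > theta_2):", np.mean(theta_1 > theta_2))  # "Is theta_1 > theta_2?"
print("Mean difference:", np.mean(theta_1 - theta_2))       # "By how much?"
```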
My question is: could this logic be applied from a non-Bayesian perspective? For example, assume that the click-through rates (CTR) of two ad campaigns are approximately normally distributed, fit their normal PDFs via MLE, then sample $a$ and $b$ from the respective fitted distributions, compare them, and record the difference for $x$ iterations.
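For concreteness, here is a minimal sketch of what I have in mind (the CTR values are made up, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily CTRs observed for each ad campaign.
ctr_a = np.array([0.012, 0.011, 0.014, 0.013, 0.010, 0.012, 0.013])
ctr_b = np.array([0.013, 0.015, 0.014, 0.016, 0.012, 0.015, 0.014])

# MLE for a normal distribution: sample mean and (biased, 1/n) standard deviation.
mu_a, sigma_a = ctr_a.mean(), ctr_a.std()
mu_b, sigma_b = ctr_b.mean(), ctr_b.std()

# Sample a, b from the fitted normals, compare, and record the difference.
x_iter = 100_000
a = rng.normal(mu_a, sigma_a, size=x_iter)
b = rng.normal(mu_b, sigma_b, size=x_iter)

print("P(b > a):", np.mean(b > a))
print("Mean difference (b - a):", np.mean(b - a))
```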
Would this be a viable alternative to NHST?