I'm relatively new to Bayesian statistics and have recently been using JAGS to build hierarchical Bayesian models (HBMs) on different datasets. While I'm very satisfied with the results (compared to standard GLMs), I need to explain to non-statisticians how these models differ from standard statistical models. In particular, I would like to illustrate why and when HBMs perform better than simpler models.
An analogy would be useful, especially one that illustrates some key elements:
- the multiple levels of heterogeneity
- the need for more computations to fit the model
- the ability to extract more "signal" from the same data
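For anyone who wants to see the "more signal" point concretely before finding an analogy for it, here is a minimal sketch of the partial-pooling idea that hierarchical models implement. All numbers are invented for illustration, and the variances `sigma2` (within-group) and `tau2` (between-group) are assumed known here, whereas a real HBM would estimate them from the data:

```python
# Toy illustration of partial pooling: each group's mean is "shrunk" toward
# the overall mean, with the amount of shrinkage set by how precise the
# group's own data are relative to the between-group variation.

def partial_pool(group_means, group_sizes, sigma2, tau2):
    """Precision-weighted compromise between each group's mean and the grand mean."""
    total_n = sum(group_sizes)
    grand_mean = sum(m * n for m, n in zip(group_means, group_sizes)) / total_n
    shrunk = []
    for m, n in zip(group_means, group_sizes):
        # weight on the group's own data: its precision vs. the prior precision
        w = (n / sigma2) / (n / sigma2 + 1 / tau2)
        shrunk.append(w * m + (1 - w) * grand_mean)
    return shrunk

print(partial_pool([10.0, 20.0], [4, 4], sigma2=4.0, tau2=1.0))
# → [12.5, 17.5]: each group borrows strength from the others,
# being pulled partway toward the grand mean of 15
```

A no-pooling analysis (one GLM per group) would keep 10 and 20 as-is; complete pooling would report 15 for both. The hierarchical estimate sits in between, which is exactly the extra "signal" extraction the analogy needs to convey.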
Note that the answer should really be an analogy that is enlightening to non-statisticians, not just a simple, easy-to-follow worked example.