Has anyone considered giving the posteriors of an analysis a sampling distribution and seeing where, methodologically, things could go from there?
For details, check out: https://sdba-stats.weebly.com
I would say this is indeed done quite a lot, namely whenever Bayesian procedures are evaluated by their frequentist properties. That said, it is less common within Bayesian work itself, since the focus there tends to be on finding a good answer given the data at hand, whereas sampling distributions concern behavior over samples that could have been observed but were not.
As a simple example, Bayesian estimators such as the posterior mean are typically not unbiased, which is a repeated-sampling property. For instance, for binomial data with true success probability $\theta_0$ and a Beta prior with hyperparameters $(\alpha_0,\beta_0)$, the expected value of the posterior mean can be written as $$ E_{Y|\theta_0}\left[E(\theta|y)\right]=w\frac{\alpha_0}{\alpha_0+\beta_0}+E_{Y|\theta_0}\left[(1-w)\frac{k}{n}\right]=w\frac{\alpha_0}{\alpha_0+\beta_0}+(1-w)\theta_0\neq\theta_0 $$ with $k$ the number of successes, $n$ the number of trials, and $$ w=\frac{\alpha_0+\beta_0}{\alpha_0+\beta_0+n} $$ so the posterior mean is pulled from $\theta_0$ toward the prior mean $\alpha_0/(\alpha_0+\beta_0)$, with the prior weight $w$ shrinking as $n$ grows.
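The bias can be checked by simulation. Here is a minimal sketch (parameter values are illustrative assumptions, not from the question): draw many binomial samples at a fixed $\theta_0$, compute the posterior mean for each, and compare the Monte Carlo average to the theoretical expectation $w\,\alpha_0/(\alpha_0+\beta_0)+(1-w)\theta_0$.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative values (assumptions for this sketch)
alpha0, beta0 = 2.0, 2.0   # Beta prior hyperparameters
theta0 = 0.1               # true success probability
n = 20                     # binomial trials per sample
reps = 200_000             # Monte Carlo replications

# Successes in each replicated sample, and the posterior mean
# E(theta | y) = (alpha0 + k) / (alpha0 + beta0 + n) for each.
k = rng.binomial(n, theta0, size=reps)
post_mean = (alpha0 + k) / (alpha0 + beta0 + n)

# Theoretical expectation over repeated samples
w = (alpha0 + beta0) / (alpha0 + beta0 + n)
theory = w * alpha0 / (alpha0 + beta0) + (1 - w) * theta0

print("MC average of posterior mean:", post_mean.mean())
print("theoretical expectation:     ", theory)
print("true theta0:                 ", theta0)
```

With these values the theoretical expectation is $\tfrac{1}{6}\cdot 0.5 + \tfrac{5}{6}\cdot 0.1 \approx 0.167$, so the Monte Carlo average sits well away from $\theta_0 = 0.1$: the estimator is biased toward the prior mean, exactly as the formula predicts.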