
If there is a frequentist procedure, we can test whether it is working as intended, either by simulating many datasets or by taking many random samples from a real dataset, and checking that p-values are uniformly distributed (when we know there is no real effect in our simulations), that p < 0.05 happens around 5% of the time, and that confidence intervals have proper coverage. A minimal sketch of that check, assuming a one-sample t-test under a true null (the model, sample size, and seed are illustrative, not from the question):
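
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 10_000, 30
rejections, covered, pvals = 0, 0, []

for _ in range(n_sims):
    x = rng.normal(loc=0.0, scale=1.0, size=n)   # no real effect
    t, p = stats.ttest_1samp(x, popmean=0.0)
    pvals.append(p)
    rejections += p < 0.05
    lo, hi = stats.t.interval(0.95, df=n - 1,
                              loc=x.mean(), scale=stats.sem(x))
    covered += lo <= 0.0 <= hi

print(f"rejection rate: {rejections / n_sims:.3f}")  # should be ~0.05
print(f"CI coverage:    {covered / n_sims:.3f}")     # should be ~0.95
# Uniformity of the p-values can be checked too, e.g.:
print(stats.kstest(pvals, "uniform"))
```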

As far as I know, Bayesians don't care about p-values or coverage, and maybe not even about random samples from a population. So what do Bayesians care about, and how can we test whether it works as intended on simulated or real datasets? I would say that events with a posterior probability of 5% should happen around 5% of the time, but that sounds awfully frequentist to me. A minimal sketch of what I mean, assuming a conjugate Beta-Binomial model (chosen only so the posterior is available in closed form): when the truth is drawn from the prior, a 95% credible interval should contain it about 95% of the time.
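
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 10_000, 50
a, b = 2.0, 2.0          # Beta(2, 2) prior; illustrative choice

covered = 0
for _ in range(n_sims):
    theta = rng.beta(a, b)                 # draw the truth from the prior
    k = rng.binomial(n, theta)             # simulate data given the truth
    post = stats.beta(a + k, b + n - k)    # conjugate posterior
    lo, hi = post.ppf(0.025), post.ppf(0.975)
    covered += lo <= theta <= hi

print(f"credible-interval coverage: {covered / n_sims:.3f}")  # ~0.95
```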

rep_ho
You may be interested in googling [posterior predictive checks](https://stats.stackexchange.com/questions/115157/what-are-posterior-predictive-checks-and-what-makes-them-useful). As a sidenote, Bayesians use a lot of frequentist statistics to verify their results. – Tim May 14 '21 at 13:23
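
A minimal sketch of the posterior predictive check the comment points to, reusing the Beta-Binomial setup from the question (the observed counts are illustrative): draw parameters from the posterior, simulate replicated datasets, and compare a test statistic on the replicates with its observed value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, k = 50, 31                      # observed data: 31 successes in 50 trials
a, b = 2.0, 2.0                    # same illustrative Beta(2, 2) prior
post = stats.beta(a + k, b + n - k)

theta_draws = post.rvs(size=5_000, random_state=rng)
k_rep = rng.binomial(n, theta_draws)          # replicated datasets

# Posterior predictive p-value for the statistic T(y) = y itself;
# values near 0 or 1 flag misfit between model and data.
ppp = np.mean(k_rep >= k)
print(f"posterior predictive p-value: {ppp:.3f}")
```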

0 Answers