If there is a frequentist procedure, we can test whether it works as intended, either by simulating many datasets or by drawing many random samples from a real dataset, and then checking that p-values are uniformly distributed under the null (when we know there is no real effect in our simulations), that p < 0.05 occurs around 5% of the time, and that confidence intervals have their stated coverage.
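To make concrete what I mean by that check, here is a toy sketch (the particular test, sample size, and number of simulations are just arbitrary choices for illustration): simulate data under a true null, compute a two-sided z-test p-value each time, and see how often p < 0.05.

```python
import math
import random

random.seed(0)
n, n_sims = 30, 10_000
pvals = []
for _ in range(n_sims):
    # one dataset simulated under the null: x_i ~ N(0, 1), testing H0: mean = 0
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)        # z-statistic with known sigma = 1
    p = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value: 2 * (1 - Phi(|z|))
    pvals.append(p)

frac = sum(p < 0.05 for p in pvals) / n_sims
print(frac)  # close to 0.05 when the test is calibrated
```

If the procedure is working, that printed fraction sits near 0.05, and a histogram of `pvals` looks flat.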
As far as I know, Bayesians don't care about p-values or coverage, and maybe not even about random samples from a population. So what do Bayesians care about, and how can one test whether a Bayesian procedure works as intended on simulated or real datasets? I would say that events with a posterior probability of 5% should happen around 5% of the time, but that sounds awfully frequentist to me.
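For what it's worth, here is the kind of check I have in mind for the Bayesian side, in a toy conjugate model where the posterior is exact (all the specific numbers, the normal-normal model, and the 90% level are just my choices for illustration): draw the parameter from the prior, draw data given that parameter, and count how often the 90% credible interval contains the true parameter.

```python
import math
import random

random.seed(1)
tau, sigma, n = 1.0, 1.0, 10       # prior sd, known noise sd, sample size (arbitrary)
n_sims, hits = 10_000, 0
z90 = 1.6448536269514722           # 95th percentile of the standard normal

for _ in range(n_sims):
    theta = random.gauss(0.0, tau)                       # parameter drawn from the prior
    ys = [random.gauss(theta, sigma) for _ in range(n)]  # data given that parameter
    # conjugate normal-normal posterior: precision-weighted combination of prior and data
    prec = 1 / tau**2 + n / sigma**2
    post_mean = (sum(ys) / sigma**2) / prec
    post_sd = math.sqrt(1 / prec)
    lo, hi = post_mean - z90 * post_sd, post_mean + z90 * post_sd
    hits += lo <= theta <= hi

coverage = hits / n_sims
print(coverage)  # close to 0.90: averaged over the prior, 90% intervals cover 90% of the time
```

Is this (averaging over draws from the prior rather than repeating an experiment with a fixed true parameter) the check that Bayesians actually endorse, or is it still a frequentist criterion in disguise?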