I am writing a paper that tests multiple hypotheses under a Bayesian framework.
The hypotheses are roughly 15 previously established ones that I am retesting under a phylogenetic framework (to control for interdependencies between data points). The procedure works like this: I have a hypothesis that binary variable X co-evolves with binary variable Y. I then fit two models on the phylogeny, one in which changes in X depend on changes in Y (and vice versa), and one in which the two traits change independently, and calculate a Bayes factor to determine which model fits the data better (sketched below).
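For concreteness, here is a minimal sketch of the comparison step for one trait pair. The marginal-likelihood values are hypothetical placeholders; in practice they would come from something like a stepping-stone or path-sampling run of each model.

```python
# Hypothetical log marginal likelihoods for one X-Y trait pair,
# one from the dependent (correlated-evolution) model and one from
# the independent model. Values are placeholders for illustration.
log_ml_dependent = -105.2
log_ml_independent = -110.8

# Log Bayes factor in favour of the dependent model.
log_bf = log_ml_dependent - log_ml_independent

# The 2 * log(BF) scale is a common reporting convention (Kass & Raftery 1995):
# roughly 2-6 = positive, 6-10 = strong, >10 = very strong evidence.
two_log_bf = 2.0 * log_bf

print(f"log BF = {log_bf:.2f}, 2*log BF = {two_log_bf:.2f}")
```

This comparison is then repeated once per hypothesis, which is where the multiple-comparisons question comes from.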
I think that from a more philosophical standpoint this isn't an issue, because I am testing previously established hypotheses and effectively re-testing them under a different approach, which is discussed here.
However, is there also a statistical argument that supports this? Here, it seems to be suggested that Bayesian models (without flat priors, which is my case) don't need to worry about multiple comparisons when making claims of confidence. While I can follow the simulations in the linked post, I don't really understand why this is the case, unless it is simply that 95% credible intervals are more conservative?
TL;DR: Is there a statistical argument for not needing to correct for multiple comparisons when performing Bayesian model comparison tests?