I'm not sure you can consider this a complete answer, so please double-check it yourself; however, here goes.
By the definition of confidence intervals,

> there is an $X\%$ chance that, when computing the $X\%$ confidence interval (CI), the true value $y$ falls within the computed CI,

you can synthesize an experiment in which you know the true parameter value $y$, and you simulate the noise (based on the assumed likelihood function), say, $P=1000$ times. When you do the fit and compute the $X\%$ confidence interval each time, $y$ should fall within the CIs $X\%$ of the time. If the empirical coverage deviates from $X\%$ in a significant way, that should affect your decision about whether to trust the procedure.
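As a concrete illustration, here is a minimal sketch of that coverage check, assuming a toy normal model with known noise standard deviation and the sample mean as the fitted estimate; the values of `y_true`, `sigma`, `n`, `P` and `X` are arbitrary choices for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assumed toy model: data ~ Normal(y_true, sigma) with known sigma;
# the fitted parameter is the mean, estimated by the sample mean.
y_true = 2.0   # fixed true parameter (frequentist view)
sigma = 1.0    # known noise standard deviation
n = 30         # observations per synthetic experiment
P = 1000       # number of simulated experiments
X = 0.95       # nominal confidence level

z = stats.norm.ppf(0.5 + X / 2)  # two-sided critical value
hits = 0
for _ in range(P):
    data = rng.normal(y_true, sigma, size=n)  # simulate the noise
    y_hat = data.mean()                       # the "fit"
    half_width = z * sigma / np.sqrt(n)       # X% CI half-width
    if y_hat - half_width <= y_true <= y_hat + half_width:
        hits += 1

print(f"empirical coverage: {hits / P:.3f} (nominal {X})")
```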
On the other hand, given the definition of credible intervals, where

> given the observed data, there is an $X\%$ probability that the true value $y$ falls within the $X\%$ credible interval,

it means that
- you must synthesize $P$ different parameter values $\{y_p\}_{p=1,\ldots,P}$ (these are your true values), drawn from the prior,
- simulate a data set for each $y_p$ using the assumed likelihood,
- solve using a Bayesian estimator $P$ times,
- compute the $X\%$ credible interval for each (again $P$ times),
- and expect that $X\%$ of the true values $y_p$ fall within their credible intervals (see the sketch after the note below).
Note: $P$ and $X$ are the same in the aforementioned scenarios.
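An analogous minimal sketch for the Bayesian check, assuming a conjugate normal-normal model (normal likelihood with known `sigma`, normal prior on the mean) so that the posterior and its central credible interval are available in closed form; again, all numeric choices are arbitrary and only illustrate the procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Assumed toy model: prior mean ~ Normal(mu0, tau0), data ~ Normal(y_p, sigma).
mu0, tau0 = 0.0, 2.0  # prior mean and prior standard deviation
sigma = 1.0           # known noise standard deviation
n = 30                # observations per synthetic data set
P = 1000              # number of replications
X = 0.95              # nominal credibility level

hits = 0
for _ in range(P):
    y_p = rng.normal(mu0, tau0)            # "true" parameter drawn from the prior
    data = rng.normal(y_p, sigma, size=n)  # simulate data given y_p
    # Conjugate posterior for the mean (the Bayesian "fit"):
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)
    lo, hi = stats.norm.interval(X, loc=post_mean, scale=np.sqrt(post_var))  # central X% credible interval
    if lo <= y_p <= hi:
        hits += 1

print(f"empirical coverage: {hits / P:.3f} (nominal {X})")
```

In both sketches the printed empirical coverage should land close to the nominal $X$, up to Monte Carlo error of order $\sqrt{X(1-X)/P}$.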
So, to summarize: to compare credible intervals to confidence intervals fairly, you need to follow their definitions. In the frequentist approach you assume a fixed set of parameters (remember, frequentists treat parameters as fixed) and simulate noise in the measurements (data), whereas in the Bayesian approach you assume your data is fixed, so you must "randomize" the parameters. If you follow this approach, credible and confidence intervals can be compared fairly (no matter the prior distribution, as long as the true parameters are drawn from it).