It's a commonly quoted result that a frequentist confidence interval is equivalent to a Bayesian credible interval under a flat prior. Ignoring for now questions about invariance under reparameterization, or the reasonableness of a flat prior over the real line in practice: why is this true mathematically?
Let $X$ be a random vector representing our data, and let $f_\theta$ be the density of $X$ conditional on some value of $\theta$. Suppose $a(X), b(X)$ are functions such that
$$ \int_{\{x \,:\, a(x) < \theta < b(x)\}} f_\theta(x)\,dx = 1 - \alpha \quad \text{for every } \theta. $$
Then, for any realization of the data $X = x$, the interval $[a(x), b(x)]$ is a confidence interval with confidence level $1 - \alpha$.
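To make the setup concrete, here is a minimal numerical sketch of the coverage condition, assuming a Gaussian location model $X \sim N(\theta, 1)$ with the usual interval $a(X) = X - z_{\alpha/2}$, $b(X) = X + z_{\alpha/2}$ (my choice of example; the question doesn't depend on it):

```python
# Coverage check, assuming X ~ N(theta, 1) and the usual interval
# a(X) = X - z, b(X) = X + z with z = Phi^{-1}(1 - alpha/2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.05
z = norm.ppf(1 - alpha / 2)                          # ~1.96
theta = 2.7                                          # arbitrary fixed parameter
x = rng.normal(loc=theta, scale=1.0, size=100_000)   # many draws of X

# Fraction of draws with a(X) < theta < b(X); matches 1 - alpha
print(np.mean((x - z < theta) & (theta < x + z)))    # ~0.95, for any theta
```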
I need to get from that to $$\Pr\big(a(x) < \theta < b(x) \mid X = x\big) = 1 - \alpha,$$ and I have no idea how. Bayes' theorem gives proportionality, $\pi(\theta \mid x) \propto f_\theta(x)$ under a flat prior, but seemingly no more. I suspect I'm not understanding something about the rigorous treatment of flat priors.
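For what it's worth, the identity does check out numerically in the same Gaussian example: treating $f_\theta(x)$ as an unnormalized posterior density in $\theta$ (the flat-prior case) and integrating it over $(a(x), b(x))$ recovers $1 - \alpha$. A sketch, again assuming the Gaussian model above:

```python
# Flat-prior posterior check for one observed x, same Gaussian model.
# Under a flat prior, pi(theta | x) is proportional to f_theta(x).
import numpy as np
from scipy.stats import norm

alpha = 0.05
z = norm.ppf(1 - alpha / 2)
x = 1.3                                         # one observed realization

thetas = np.linspace(x - 10, x + 10, 20_001)    # grid covering essentially all the mass
dtheta = thetas[1] - thetas[0]
unnorm = norm.pdf(x, loc=thetas, scale=1.0)     # f_theta(x) as a function of theta
post = unnorm / (unnorm.sum() * dtheta)         # normalized flat-prior posterior

inside = (thetas > x - z) & (thetas < x + z)
print(post[inside].sum() * dtheta)              # ~0.95 = 1 - alpha
```

So the equality clearly holds in this case; what I'm missing is the general argument that links the coverage integral over $x$ to the posterior integral over $\theta$.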