Suppose we have a sample of $n$ independent observations $\boldsymbol{y}$, and let $S(\boldsymbol{y})$ be a sufficient statistic for the underlying parameter $\boldsymbol{\theta}$, so that by the factorization theorem the density factors as $f(\boldsymbol{y}\mid\boldsymbol{\theta}) = g(S(\boldsymbol{y})\mid\boldsymbol{\theta})\,h(\boldsymbol{y})$. Writing $S(\boldsymbol{y}) = s$, we can express the posterior distribution of $\boldsymbol{\theta}$ using Bayes' theorem:
$$f(\boldsymbol{\theta}\mid \boldsymbol{y}) = \dfrac{f(\boldsymbol{y}\mid\boldsymbol{\theta})\, f(\boldsymbol{\theta})}{\int f(\boldsymbol{y}\mid\boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}} = \dfrac{h(\boldsymbol{y})\, g(S(\boldsymbol{y})\mid \boldsymbol{\theta})\, f(\boldsymbol{\theta})}{\int h(\boldsymbol{y})\, g(S(\boldsymbol{y})\mid \boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}} = \dfrac{g(s\mid\boldsymbol{\theta})\, f(\boldsymbol{\theta})}{m(s)} = f(\boldsymbol{\theta}\mid s),$$
where $m(s)$ is the marginal distribution of $s$.
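To convince myself numerically, here is a minimal sketch (my own Bernoulli/Beta example, with arbitrary choices of prior and seed, not taken from any reference) comparing the posterior computed from the full likelihood $f(\boldsymbol{y}\mid\theta)$ with the one computed from the binomial density of the sufficient statistic $s = \sum_i y_i$; on a grid the two normalized posteriors agree:

```python
import numpy as np
from scipy.stats import beta, binom

rng = np.random.default_rng(0)
n, theta_true = 20, 0.3
y = rng.binomial(1, theta_true, size=n)    # n independent Bernoulli draws
s = y.sum()                                # sufficient statistic S(y)

theta = np.linspace(1e-6, 1 - 1e-6, 2001)  # grid over the parameter
prior = beta.pdf(theta, 2, 2)              # Beta(2, 2) prior f(theta)

# full-data likelihood: prod_i theta^{y_i} (1 - theta)^{1 - y_i}
lik_full = theta**s * (1 - theta)**(n - s)
# likelihood through the sufficient statistic: Binomial(n, theta) pmf at s
lik_suff = binom.pmf(s, n, theta)

# normalize both posteriors on the grid; the theta-free factors
# (h(y) and the binomial coefficient) drop out in the normalization
post_full = lik_full * prior
post_full /= post_full.sum()
post_suff = lik_suff * prior
post_suff /= post_suff.sum()

print(np.abs(post_full - post_suff).max())  # essentially zero: the posteriors coincide
```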
Why does the integral in the denominator evaluate to the marginal distribution of $s$, i.e. $m(s)$?
The $h(\boldsymbol{y})$ factor does not depend on $\boldsymbol{\theta}$, so it can be pulled out of the integral and cancels with the same factor in the numerator, leaving $\int g(S(\boldsymbol{y})\mid \boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}$ in the denominator.
Does $g(S(\boldsymbol{y}) = s\mid\boldsymbol{\theta}) = f(s\mid\boldsymbol{\theta})$, i.e. is $g$ the conditional density of $s$ given $\boldsymbol{\theta}$? Because then it would make sense that we get the marginal distribution of $s$.
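In other words, if $g(s\mid\boldsymbol{\theta})$ really is the conditional density of $s$, then the remaining integral is, by definition, the marginal density of $s$,
$$m(s) = \int f(s\mid\boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}.$$
Is this the correct way to read the denominator?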