
Given a sample of $n$ independent observations $\boldsymbol{y}$, let $S(\boldsymbol{y})$ be a sufficient statistic for the underlying parameter $\boldsymbol{\theta}$, so that the density factorizes as $f(\boldsymbol{y}|\boldsymbol{\theta}) = g(S(\boldsymbol{y})|\boldsymbol{\theta})\,h(\boldsymbol{y})$. Writing $S(\boldsymbol{y})=s$, we can express the posterior distribution of $\boldsymbol{\theta}$ using Bayes' theorem:

$$f(\boldsymbol{\theta}|\boldsymbol{y}) = \dfrac{f(\boldsymbol{y}|\boldsymbol{\theta})\, f(\boldsymbol{\theta})}{\int f(\boldsymbol{y}|\boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}} = \dfrac{h(\boldsymbol{y})\, g(S(\boldsymbol{y})|\boldsymbol{\theta})\, f(\boldsymbol{\theta})}{\int h(\boldsymbol{y})\, g(S(\boldsymbol{y})|\boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}} = \dfrac{g(s|\boldsymbol{\theta})\, f(\boldsymbol{\theta})}{m(s)} = f(\boldsymbol{\theta}|s)$$

where $m(s)$ is the marginal distribution of $s$.
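For concreteness, here is a minimal example of what I mean by this factorization (my own illustration, not part of the setup above): for a Bernoulli($\theta$) sample with $S(\boldsymbol{y})=\sum_i y_i$,

$$f(\boldsymbol{y}|\theta) = \prod_{i=1}^{n} \theta^{y_i}(1-\theta)^{1-y_i} = \underbrace{\theta^{s}(1-\theta)^{n-s}}_{g(S(\boldsymbol{y})|\theta)} \cdot \underbrace{1}_{h(\boldsymbol{y})}, \qquad s=\sum_{i=1}^{n} y_i.$$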

Why does the integral in the denominator evaluate to the marginal distribution of $s$, i.e. $m(s)$?

The $h(\boldsymbol{y})$ factor does not depend on $\boldsymbol{\theta}$, so it pulls out of the integral and cancels with the numerator; we are left with $\int g(S(\boldsymbol{y})|\boldsymbol{\theta})\,f(\boldsymbol{\theta})\, d\boldsymbol{\theta}$ in the denominator.
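Explicitly (my own spelling-out of that cancellation, using the same symbols as above):

$$\dfrac{h(\boldsymbol{y})\, g(s|\boldsymbol{\theta})\, f(\boldsymbol{\theta})}{\int h(\boldsymbol{y})\, g(s|\boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}} = \dfrac{h(\boldsymbol{y})\, g(s|\boldsymbol{\theta})\, f(\boldsymbol{\theta})}{h(\boldsymbol{y}) \int g(s|\boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}} = \dfrac{g(s|\boldsymbol{\theta})\, f(\boldsymbol{\theta})}{\int g(s|\boldsymbol{\theta})\, f(\boldsymbol{\theta})\, d\boldsymbol{\theta}}.$$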

Does $g(S(\boldsymbol{y})=s|\boldsymbol{\theta}) = f(s|\boldsymbol{\theta})$? Because then it would make sense that we get the marginal distribution of $s$.
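As a sanity check on the claim $f(\boldsymbol{\theta}|\boldsymbol{y}) = f(\boldsymbol{\theta}|s)$, here is a small numerical sketch I tried (entirely my own setup: Bernoulli data, a Beta prior, grid normalization; all names and values are hypothetical). It only checks that the two posteriors coincide; it does not settle whether $g(s|\theta)$ is literally the density of $s$, which is what I am asking about:

```python
import numpy as np
from math import comb

# Illustrative check (my own construction, not from the derivation above):
# Bernoulli(theta) data, Beta(a, b) prior, sufficient statistic s = sum(y).
rng = np.random.default_rng(0)
n, theta_true = 20, 0.3                        # assumed sample size and true theta
y = rng.binomial(1, theta_true, size=n)
s = int(y.sum())

a, b = 2.0, 2.0                                # assumed Beta prior hyperparameters
grid = np.linspace(1e-6, 1 - 1e-6, 2001)       # grid over theta
prior = grid**(a - 1) * (1 - grid)**(b - 1)    # unnormalized Beta(a, b) prior

# Posterior from the full sample: prod_i theta^{y_i}(1-theta)^{1-y_i} * prior
lik_full = grid**s * (1 - grid)**(n - s)
post_full = lik_full * prior
post_full /= post_full.sum()                   # normalize on the grid

# Posterior from the sufficient statistic alone: Binomial(n, theta) pmf at s * prior
lik_s = comb(n, s) * grid**s * (1 - grid)**(n - s)
post_s = lik_s * prior
post_s /= post_s.sum()

# The constant comb(n, s) cancels in the normalization, so the two posteriors agree.
print(np.max(np.abs(post_full - post_s)))      # effectively 0 (floating-point noise)
```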

J.doe
    Please tell us what assumption you are referring to--you haven't stated any assumptions in your post. As to whether anything is "obvious," that's a matter of opinion. – whuber Sep 03 '19 at 21:55
  • OK, I tried to edit it as best as I could @whuber – J.doe Sep 03 '19 at 22:17

0 Answers