
I have a theoretical question. I understand that JAGS samples from the posterior distribution of a model. But I don't understand (nor can I find in the documentation) how it calculates the posterior in the first place (the function from which it later samples using Gibbs).

kjetil b halvorsen
user552231

1 Answer


There's only one way of obtaining the posterior distribution: by applying Bayes' theorem. If your likelihood is $f(X|\theta)$ and the prior is $g(\theta)$, then the posterior is

$$ g(\theta|X) \propto f(X|\theta)\, g(\theta) $$

where the normalizing constant $f(X)$ is ignored, because it is not needed for MCMC or optimization.

For example, suppose you assume that $X$ follows a binomial distribution with known $n$ and unknown $p$, and you place a uniform prior on $p$. Then, to calculate the unnormalized posterior density at some particular value of $p$, you evaluate the binomial probability mass function at your data point $x$, with parameters $n$ and $p$, and multiply it by the uniform probability density function evaluated at $p$. No black magic involved.
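The binomial-with-uniform-prior calculation above can be sketched in a few lines of Python (the function name is illustrative, not anything JAGS exposes):

```python
import math

def unnormalized_posterior(p, x, n):
    """Unnormalized posterior for a Binomial(n, p) likelihood
    with a Uniform(0, 1) prior on p.

    This is just likelihood * prior; the normalizing constant
    is dropped, as it is not needed for MCMC.
    """
    if not 0.0 <= p <= 1.0:
        return 0.0  # the uniform prior has zero density outside [0, 1]
    likelihood = math.comb(n, x) * p**x * (1 - p) ** (n - x)
    prior = 1.0  # Uniform(0, 1) density is 1 on [0, 1]
    return likelihood * prior

# With a uniform prior, the unnormalized posterior peaks at the
# maximum-likelihood estimate p = x / n.
print(unnormalized_posterior(0.7, x=7, n=10))
```

Evaluating this product pointwise is all a Gibbs or Metropolis sampler needs; the normalizing constant $f(X)$ cancels in the acceptance ratios.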

Tim