
The Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution for which direct sampling is difficult.

Would anyone have real-world, concrete examples for which direct sampling is difficult and M-H procedure is easy?

  • A mixture of two Gaussian distributions. – Xi'an Sep 26 '19 at 18:05
  • @Xi'an could you expand and show why direct sampling would be difficult in the case of two Gaussian distributions? – lalessandro Oct 21 '19 at 17:57
  • A [mixture of two Gaussian distributions](https://stats.stackexchange.com/q/387687/7224) is not the same as two Gaussian distributions. As a density, the corresponding posterior does not offer a natural interpretation, except when divided into a [sum of $2^n$ terms](https://stats.stackexchange.com/a/388700/7224), as in the EM algorithm. – Xi'an Oct 21 '19 at 18:14
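To make Xi'an's point concrete: the difficulty is not sampling *from* a mixture (that is easy, by first picking a component), but sampling from the *posterior of the mixture's parameters*, whose likelihood is a product of $n$ two-term sums ($2^n$ terms when expanded). A minimal random-walk MH sketch for the component means, under illustrative assumptions (known equal weights, unit variances, flat priors):

```python
import math
import random

# Sketch, not from the thread: data from 0.5*N(mu1,1) + 0.5*N(mu2,1).
# The posterior over (mu1, mu2) has no closed form, but random-walk
# Metropolis-Hastings only needs the log-density up to a constant.

random.seed(1)
true_mu = (-2.0, 3.0)
data = [random.gauss(true_mu[random.random() < 0.5], 1.0) for _ in range(200)]

def log_post(mu1, mu2):
    # log of prod_i [0.5*phi(x_i - mu1) + 0.5*phi(x_i - mu2)], flat prior
    def log_mix(x):
        a = -0.5 * (x - mu1) ** 2
        b = -0.5 * (x - mu2) ** 2
        m = max(a, b)  # log-sum-exp for numerical stability
        return m + math.log(0.5 * math.exp(a - m) + 0.5 * math.exp(b - m))
    return sum(log_mix(x) for x in data)

mu = [0.0, 0.0]
lp = log_post(*mu)
samples = []
for _ in range(5000):
    prop = [m + random.gauss(0, 0.25) for m in mu]  # symmetric proposal
    lp_prop = log_post(*prop)
    if math.log(random.random()) < lp_prop - lp:    # MH accept/reject
        mu, lp = prop, lp_prop
    samples.append(tuple(mu))
```

After burn-in, the chain concentrates near the true means (up to the label switching that the $2^n$-term structure implies).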

1 Answer


I don't have a great example off the top of my head, but MH is easy compared to direct sampling whenever a parameter's prior is not conjugate to its likelihood; in fact, that is the only setting in which I have seen MH preferred. A toy example: the data follow $p \sim \text{Beta}(\alpha, \beta)$, and you want independent $\text{Gamma}$ priors on $\alpha$ and $\beta$. This model is not conjugate, so the full conditionals of $\alpha$ and $\beta$ have no closed form and you would need MH to draw them.
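The toy example can be sketched as follows. The answer leaves the Gamma hyperparameters unspecified, so the shape/rate values (2, 1) below are illustrative, as is the simulated data:

```python
import math
import random

# Sketch: observed draws p_i ~ Beta(alpha, beta), with independent
# Gamma(2, 1) hyperpriors on alpha and beta (illustrative values).
# The posterior over (alpha, beta) has no closed form, so we run
# random-walk MH on the log scale to keep both parameters positive.

random.seed(0)
ps = [random.betavariate(2.0, 5.0) for _ in range(100)]  # "observed" p_i

def log_post(la, lb):  # la = log(alpha), lb = log(beta)
    a, b = math.exp(la), math.exp(lb)
    ll = sum((a - 1) * math.log(p) + (b - 1) * math.log(1 - p) for p in ps)
    ll += len(ps) * (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    prior = (2 - 1) * la - a + (2 - 1) * lb - b  # log Gamma(2, 1) priors
    return ll + prior + la + lb                  # + Jacobian of log transform

state, lp = (0.0, 0.0), log_post(0.0, 0.0)
draws = []
for _ in range(5000):
    prop = (state[0] + random.gauss(0, 0.15),
            state[1] + random.gauss(0, 0.15))    # symmetric random walk
    lp_prop = log_post(*prop)
    if math.log(random.random()) < lp_prop - lp:
        state, lp = prop, lp_prop
    draws.append((math.exp(state[0]), math.exp(state[1])))
```

The posterior draws of $(\alpha, \beta)$ concentrate near the values that generated the data, with some shrinkage from the priors.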

This presentation gives an example of a Poisson GLM in which MH is used to draw the GLM coefficients.
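A hedged sketch of that kind of setup (the details below are assumed, not taken from the presentation): a Poisson GLM $y_i \sim \text{Poisson}(\exp(b_0 + b_1 x_i))$ with flat priors, whose posterior is not a standard distribution, so the coefficients are drawn with random-walk MH:

```python
import math
import random

random.seed(2)
xs = [i / 10 for i in range(50)]

def pois(lam):  # simple Poisson sampler via CDF inversion
    u, k, p = random.random(), 0, math.exp(-lam)
    c = p
    while u > c:
        k += 1
        p *= lam / k
        c += p
    return k

ys = [pois(math.exp(0.5 + 0.3 * x)) for x in xs]  # true b0=0.5, b1=0.3

def log_lik(b0, b1):
    # Poisson log-likelihood with log link, dropping the y! constant
    return sum(y * (b0 + b1 * x) - math.exp(b0 + b1 * x)
               for x, y in zip(xs, ys))

beta, lp = (0.0, 0.0), log_lik(0.0, 0.0)
chain = []
for _ in range(4000):
    prop = (beta[0] + random.gauss(0, 0.1),
            beta[1] + random.gauss(0, 0.05))      # symmetric proposal
    lp_prop = log_lik(*prop)
    if math.log(random.random()) < lp_prop - lp:  # flat prior: ratio of likelihoods
        beta, lp = prop, lp_prop
    chain.append(beta)
```

After burn-in, the chain's means recover the coefficients used to simulate the data.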

If you don't already know, it is worth noting that direct sampling is the special case of MH in which every proposed value is accepted: an independence sampler whose proposal equals the target has acceptance probability exactly 1. So whenever we can sample directly we should, to avoid having to tune a proposal distribution.
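This closing remark is easy to verify numerically. Run an independence MH chain whose proposal *is* the target (a standard normal here); the acceptance ratio $p(x')q(x)\,/\,p(x)q(x')$ cancels to 1, so every proposal is accepted and the chain is just i.i.d. direct sampling:

```python
import math
import random

random.seed(3)

def log_p(x):  # standard normal target, up to a constant
    return -0.5 * x * x

x = random.gauss(0, 1)
accepts = 0
for _ in range(1000):
    x_new = random.gauss(0, 1)  # independence proposal q = target p
    # MH ratio: [p(x_new) * q(x)] / [p(x) * q(x_new)] with q = p
    log_ratio = (log_p(x_new) + log_p(x)) - (log_p(x) + log_p(x_new))
    if random.random() < math.exp(log_ratio):  # log_ratio == 0, so always true
        x = x_new
        accepts += 1
```

The acceptance count equals the number of iterations, confirming that nothing is ever rejected.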

  • This doesn't appear to answer the question ("concrete, real-world"). –  Nov 01 '16 at 04:29