To my understanding, Approximate Bayesian Computation (ABC) and Markov Chain Monte Carlo (MCMC) have very similar aims. Below I describe my understanding of these methods and how I perceive the differences in their application to real-life data.
Approximate Bayesian Computation
ABC consists of sampling a parameter $\theta$ from a prior, then computing a statistic $x_i$ through numerical simulation, which is compared to some observed $x_{obs}$. Based on a rejection rule, the sample is either retained or rejected. The list of retained $\theta$ values forms an approximation to the posterior distribution.
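To make this concrete, here is a minimal sketch of an ABC rejection sampler. The model, prior, summary statistic (the sample mean), tolerance `eps`, and number of proposals are all illustrative assumptions, not part of the question above.

```python
import numpy as np

# Toy assumption: data are 100 draws from Normal(theta, 1), the summary
# statistic is the sample mean, and the prior on theta is Normal(0, 5).
rng = np.random.default_rng(0)
x_obs = rng.normal(2.0, 1.0, size=100)   # pretend "observed" data
s_obs = x_obs.mean()                     # observed summary statistic
eps = 0.05                               # tolerance of the rejection rule

accepted = []
for _ in range(50_000):
    theta = rng.normal(0.0, 5.0)               # 1. draw theta from the prior
    x_sim = rng.normal(theta, 1.0, size=100)   # 2. simulate data given theta
    s_sim = x_sim.mean()                       # 3. compute the summary statistic
    if abs(s_sim - s_obs) <= eps:              # 4. retain theta if the statistic is close enough
        accepted.append(theta)

# `accepted` is an approximate sample from P(theta | x_obs)
```

Note that the likelihood $P(x \mid \theta)$ is never evaluated; only forward simulation from the model is required.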
Markov Chain Monte Carlo
MCMC consists of exploring values of the parameter $\theta$. It takes a first sample $\theta_1$, computes $P(x_{obs} \mid \theta_1)P(\theta_1)$, and then jumps (according to some rule) to a new value $\theta_2$, for which $P(x_{obs} \mid \theta_2)P(\theta_2)$ is computed in turn. The ratio $\frac{P(x_{obs} \mid \theta_2)P(\theta_2)}{P(x_{obs} \mid \theta_1)P(\theta_1)}$ is calculated and, depending on some threshold value, the next jump is made from the first or the second position. The exploration of $\theta$ values goes on and on, and in the end the distribution of retained $\theta$ values is the posterior distribution $P(\theta \mid x)$ (for a reason that is still unknown to me).
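For comparison, here is a minimal sketch of one such rule, a random-walk Metropolis sampler, on the same assumed toy model as above (Normal likelihood, Normal prior; the proposal scale and chain length are illustrative). It works in log space to avoid numerical underflow of the product $P(x_{obs} \mid \theta)P(\theta)$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x_obs = rng.normal(2.0, 1.0, size=100)   # same pretend "observed" data

def log_unnorm_posterior(theta):
    # log P(x_obs | theta) + log P(theta), up to an additive constant
    log_lik = stats.norm.logpdf(x_obs, loc=theta, scale=1.0).sum()
    log_prior = stats.norm.logpdf(theta, loc=0.0, scale=5.0)
    return log_lik + log_prior

theta = 0.0                  # starting value theta_1
samples = []
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, 0.5)       # jump to a candidate theta_2
    log_ratio = log_unnorm_posterior(proposal) - log_unnorm_posterior(theta)
    if np.log(rng.uniform()) < log_ratio:         # accept with probability min(1, ratio)
        theta = proposal
    samples.append(theta)

# after discarding an initial burn-in, `samples` approximates P(theta | x_obs)
```

Unlike the ABC sketch, this requires being able to evaluate $P(x_{obs} \mid \theta)$ at any $\theta$.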
I realize that my explanations fail to capture the variety of methods that exist under each of these terms (especially for MCMC).
ABC vs MCMC (pros and cons)
ABC has the advantage that one does not need to be able to evaluate $P(x \mid \theta)P(\theta)$ analytically. As such, ABC is convenient for complex models where MCMC would not be feasible.
MCMC allows one to carry out statistical tests (likelihood-ratio test, G-test, ...), while I don't think this is feasible with ABC.
Am I right so far?
Question
- How do ABC and MCMC differ in their applications? How does one decide which method to use?