Say I have a bunch of data drawn from a Poisson distribution and I want to find the posterior over the rate parameter, i.e. I'm fitting:
$p(\lambda \mid X) \propto p(X \mid \lambda)\,p(\lambda)$
where $p(X|\lambda) = \frac{\exp(-\lambda)\lambda^x}{x!}$ so that my log-likelihood looks like:
$\log \mathcal{L}(\lambda \mid X) = x \log\lambda - \lambda + \text{const.}$
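For concreteness, here's a minimal sketch of what I mean by that log-likelihood for an array of counts (the function and variable names are my own, not from any particular library):

```python
import numpy as np

def log_like_lambda(lam, x):
    # Poisson log-likelihood in lambda, summed over all observations,
    # dropping the constant -log(x!) term
    return np.sum(x * np.log(lam) - lam)
```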
Now, since $\lambda > 0$, I transform coordinates to $\alpha = \log \lambda$. My new distribution is:
$p(X \mid \alpha) = \frac{\exp(-\exp(\alpha))\exp(\alpha x)}{x!}\cdot \boldsymbol{\exp(\alpha)}$
where the final $\exp(\alpha)$ comes from the Jacobian of the transformation.
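Explicitly, the transformation is $\lambda = \exp(\alpha)$, so

$\left|\frac{d\lambda}{d\alpha}\right| = \exp(\alpha)$

which is the extra factor multiplying the density above.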
This makes:
$\log \mathcal{L}(\alpha \mid X) = -\exp(\alpha) + \alpha x + \boldsymbol{\alpha} + \text{const.}$
where the final $\boldsymbol{\alpha}$ in the new log-likelihood comes from the Jacobian above.
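In code, the transformed version (again a sketch with my own names) would be:

```python
def log_like_alpha(alpha, x):
    # Transformed log-likelihood in alpha = log(lambda);
    # the trailing +alpha is the Jacobian term
    return np.sum(x * alpha - np.exp(alpha)) + alpha
```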
The problem I'm having is that if I include that extra $\alpha$ term, my Metropolis-Hastings MCMC gives an incorrect result. If I instead use a log-likelihood that excludes it:
$\log \mathcal{L}(\alpha \mid X) = -\exp(\alpha) + \alpha x + \text{const.}$
then I get correct results.
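For reference, here is a minimal sketch of the kind of random-walk Metropolis sampler I mean, using a symmetric Gaussian proposal in $\alpha$ (all names and tuning values are my own choices, not from a library); I pass either version of the log-likelihood above as `log_target` and compare the resulting histograms of $\lambda$:

```python
def metropolis(log_target, x, n_steps=50_000, step=0.5, alpha0=0.0):
    rng = np.random.default_rng(0)
    alpha = alpha0
    lp = log_target(alpha, x)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = alpha + step * rng.normal()
        lp_prop = log_target(prop, x)
        # Symmetric proposal, so the acceptance probability reduces
        # to the ratio of target densities
        if np.log(rng.uniform()) < lp_prop - lp:
            alpha, lp = prop, lp_prop
        samples[i] = alpha
    return np.exp(samples)  # back-transform the chain to lambda
```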
My question is: Why does the Metropolis-Hastings algorithm not care about the Jacobian?