I am interested in deriving the full conditional for the mean parameter in a Neg-Binomial model with a Gamma prior on the mean, as such:

\begin{align*} Y|\lambda,\phi\sim & NB(\lambda,\phi)\\ \lambda\sim & Gamma(a,b) \end{align*}

where $\lambda$ is the mean parameter and $\phi$ is the dispersion:

\begin{align*} f_{Y}(y|\lambda,\phi)= & \frac{\Gamma(y+\phi)}{y!\Gamma(\phi)}\left(\frac{\lambda}{\lambda+\phi}\right)^{y}\left(\frac{\phi}{\lambda+\phi}\right)^{\phi} \end{align*}

I've been told that the full conditional ($p(\lambda|y,\phi)$) for $\lambda$ should have a closed form. However, I just can't see how this can be. So far all I've done is multiply $f_{Y}(y)$ by the Gamma prior on $\lambda$, and am not sure how to simplify beyond that. All help and tips greatly appreciated!

Min
  • Is this for self-study purposes? I'm asking because if yes, you might need to add the self-study tag – Vasilis Vasileiou Feb 05 '19 at 11:20
  • Have you inferred the density of $y | \lambda, \phi$ on your own, or has it been provided to you exactly? I ask because there are various parametrizations of the NB distribution. – Greenparker Feb 05 '19 at 18:49
  • Also, is there no prior on $\phi$? It would make sense for there to be a Gamma prior on $\phi$ as well, so that $\lambda / (\lambda + \phi)$ is Beta. – Greenparker Feb 05 '19 at 19:01
  • Yes, $\phi$ has a Gamma prior as well. I excluded it because I didn't think it would be relevant, but it seems it is! I use this parametrization for NB because it seems most Bayesians use it, i.e. for NB regression models. So to get the full conditional for $\lambda$ it seems I should massage around the Beta distribution for $\lambda/(\lambda + \phi)$? – Min Feb 05 '19 at 19:38
  • My initial suspicion (I'd be happy to be proven wrong) is that there is only a closed form (i.e. something I can write down without an integral) for a known value of the dispersion parameter. Is that assumed? Or is the prior on that parameter a mixture of finite point priors? – Björn Feb 05 '19 at 19:52

1 Answer

To obtain a full conditional distribution $p(\lambda \mid y, \phi)$, you will need to make use of Bayes' Theorem.

$$p(\lambda \mid y, \phi) = \dfrac{f(y| \lambda, \phi) f(\lambda | \phi)}{ f(y \mid \phi) }\,, $$

where $f(y|\phi)$ is the marginal density of $y$, after having integrated out $\lambda$. However, note that $f(y | \phi)$ is merely a constant, not a function of $\lambda$ (since we are interested in $\lambda$ given $y$). So,

$$p(\lambda \mid y, \phi) \propto f(y| \lambda, \phi) f(\lambda | \phi)\,. $$

Here, the proportionality constant $1/f(y|\phi)$ is ignored because it is uniquely determined by the requirement that $p(\lambda \mid y, \phi)$ be a proper density. So to find $p(\lambda \mid y, \phi)$, just write down both densities and multiply them; remember, you can absorb all terms involving only $y$ and $\phi$ into the proportionality constant.
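The recipe above can be checked numerically: evaluate the unnormalized product $f(y|\lambda,\phi)\,f(\lambda)$ on a grid of $\lambda$ values and normalize, and the unknown constant $1/f(y|\phi)$ drops out. A minimal sketch in Python, using the question's NB parametrization and a Gamma(shape $a$, rate $b$) prior; the specific values of $y$, $\phi$, $a$, $b$ are illustrative assumptions, not taken from the post:

```python
import math

def log_nb(y, lam, phi):
    """Log NB pmf with mean lam and dispersion phi (question's parametrization)."""
    return (math.lgamma(y + phi) - math.lgamma(y + 1) - math.lgamma(phi)
            + y * math.log(lam / (lam + phi))
            + phi * math.log(phi / (lam + phi)))

def log_gamma_prior(lam, a, b):
    """Log Gamma(a, b) density (shape a, rate b), dropping lambda-free constants."""
    return (a - 1) * math.log(lam) - b * lam

y, phi, a, b = 7, 2.0, 2.0, 0.5            # illustrative values (assumed)
step = 0.01
grid = [step * k for k in range(1, 4001)]  # lambda grid on (0, 40]

# Unnormalized log posterior: log-likelihood + log-prior.
log_post = [log_nb(y, lam, phi) + log_gamma_prior(lam, a, b) for lam in grid]

# Normalize on the grid; the constant 1/f(y|phi) never needs to be known.
m = max(log_post)                          # subtract max for numerical stability
w = [math.exp(lp - m) for lp in log_post]
Z = sum(w) * step
post = [wi / Z for wi in w]

print(sum(post) * step)  # ≈ 1.0: a proper density after normalization
```

The point of the sketch is only that normalization can be done without ever computing $f(y|\phi)$ analytically; whether the result matches a familiar closed-form family is exactly the question being discussed.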

See some discussions here, here.

Greenparker
    I'm aware of this; the issue is that when they are multiplied together, the product can't be simplified into a familiar distribution due to the $\left(\frac{\lambda}{\lambda+\phi}\right)^{y}$ and $\left(\frac{\phi}{\lambda+\phi}\right)^{\phi}$ terms – Min Feb 05 '19 at 18:33