
Given two independent random variables $X\sim \mathrm{Gamma}(\alpha_X,\beta_X)$ and $Y\sim \mathrm{Gamma}(\alpha_Y,\beta_Y)$, what is the distribution of the difference, i.e. $D=X-Y$?

If the result is not well-known, how would I go about deriving the result?

FBC
  • I think this may be relevant: http://stats.stackexchange.com/q/2035/7071 – dimitriy Jan 23 '13 at 21:14
  • Unfortunately not relevant, that post considers the weighted sum of Gamma random variables where the weights are strictly positive. In my case the weights would be +1 and -1 respectively. – FBC Jan 23 '13 at 21:17
  • The Moschopoulos paper claims that the method can be extended to linear combinations, but you are right that the rescaling seems to be restricted to weights greater than 0. I stand corrected. – dimitriy Jan 23 '13 at 21:41
  • There's little hope of deriving anything simple or in closed form unless the two scale factors are the same. – whuber Jan 23 '13 at 21:41
  • @whuber Perhaps you could critique my answer below for the case of different scale factors? – Dilip Sarwate Jan 25 '13 at 02:56
  • @Dilip You have implicitly assumed both shape parameters are integers; that assumption allows for a huge simplification. – whuber Jan 25 '13 at 14:55
  • @whuber I didn't make an _implicit_ assumption; everything that I wrote up to and including the last displayed integral holds for all shape parameters. For the development beyond that I _explicitly_ stated that it applied for the case when the shape parameters are integers. – Dilip Sarwate Jan 25 '13 at 15:18
  • @Dilip Sorry; I missed that--you are correct that you explicitly included the assumption of integrality. However, right up to that point you have only applied the definition of the sum and have not yet actually done any reduction of the problem, so there's still nothing to critique. – whuber Jan 25 '13 at 15:22
  • Just a small remark: for the special case of exponentially distributed rvs with the same parameter the result is Laplace (http://en.wikipedia.org/wiki/Laplace_distribution). – Richi W Mar 28 '13 at 09:57
  • maybe this helps: http://www.math.kit.edu/stoch/~klar/seite/veroeffentlichungen/media/note-vg-revision.pdf – bonanza Jun 12 '15 at 08:02
  • An interesting special case: http://math.stackexchange.com/questions/85249/distribution-of-difference-of-chi-squared-variables – Felipe G. Nievinski Jun 18 '16 at 18:14

3 Answers


I will outline how the problem can be approached and state what I think the end result will be for the special case when the shape parameters are integers, but will not fill in the details.

  • First, note that $X-Y$ takes on values in $(-\infty,\infty)$ and so $f_{X-Y}(z)$ has support $(-\infty,\infty)$.

  • Second, from the standard results that the density of the sum of two independent continuous random variables is the convolution of their densities, that is, $$f_{X+Y}(z) = \int_{-\infty}^\infty f_X(x)f_Y(z-x)\,\mathrm dx$$ and that the density of the random variable $-Y$ is $f_{-Y}(\alpha) = f_Y(-\alpha)$, deduce that $$f_{X-Y}(z) = f_{X+(-Y)}(z) = \int_{-\infty}^\infty f_X(x)f_{-Y}(z-x)\,\mathrm dx = \int_{-\infty}^\infty f_X(x)f_Y(x-z)\,\mathrm dx.$$

  • Third, for non-negative random variables $X$ and $Y$, note that the above expression simplifies to $$f_{X-Y}(z) = \begin{cases} \int_0^\infty f_X(x)f_Y(x-z)\,\mathrm dx, & z < 0,\\ \int_{0}^\infty f_X(y+z)f_Y(y)\,\mathrm dy, & z > 0. \end{cases}$$

  • Finally, using parametrization $\Gamma(s,\lambda)$ to mean a random variable with density $\lambda\frac{(\lambda x)^{s-1}}{\Gamma(s)}\exp(-\lambda x)\mathbf 1_{x>0}(x)$, and with $X \sim \Gamma(s,\lambda)$ and $Y \sim \Gamma(t,\mu)$ random variables, we have for $z > 0$ that $$\begin{align*}f_{X-Y}(z) &= \int_{0}^\infty \lambda\frac{(\lambda (y+z))^{s-1}}{\Gamma(s)}\exp(-\lambda (y+z)) \mu\frac{(\mu y)^{t-1}}{\Gamma(t)}\exp(-\mu y)\,\mathrm dy\\ &= \exp(-\lambda z) \int_0^\infty p(y,z)\exp(-(\lambda+\mu)y)\,\mathrm dy.\tag{1} \end{align*}$$ Similarly, for $z < 0$, $$\begin{align*}f_{X-Y}(z) &= \int_{0}^\infty \lambda\frac{(\lambda x)^{s-1}}{\Gamma(s)}\exp(-\lambda x) \mu\frac{(\mu (x-z))^{t-1}}{\Gamma(t)}\exp(-\mu (x-z))\,\mathrm dx\\ &= \exp(\mu z) \int_0^\infty q(x,z)\exp(-(\lambda+\mu)x)\,\mathrm dx.\tag{2} \end{align*}$$ Here $p(y,z)$ and $q(x,z)$ denote the non-exponential factors collected from the respective integrands.
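As a sanity check on the case-split convolution above, here is a small numerical sketch (pure Python; the parameter values are arbitrary illustrative choices, not anything from the question) that evaluates $f_{X-Y}$ by the trapezoid rule and verifies that the resulting density integrates to $1$ and is symmetric when the two distributions coincide:

```python
import math

def gamma_pdf(x, shape, rate):
    """Gamma density in the (shape, rate) parametrization used above."""
    if x <= 0:
        return 0.0
    return rate * (rate * x) ** (shape - 1) / math.gamma(shape) * math.exp(-rate * x)

def diff_pdf(z, s, lam, t, mu, upper=50.0, n=2000):
    """f_{X-Y}(z) via the one-sided convolution integrals, trapezoid rule."""
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        if z > 0:
            val = gamma_pdf(u + z, s, lam) * gamma_pdf(u, t, mu)   # integrate over y
        else:
            val = gamma_pdf(u, s, lam) * gamma_pdf(u - z, t, mu)   # integrate over x
        total += val if 0 < i < n else 0.5 * val
    return total * h

# The density of X - Y should integrate to 1 over the whole real line.
s, lam, t, mu = 2.5, 1.3, 1.7, 0.8
grid = [-25 + 50 * k / 400 for k in range(401)]
vals = [diff_pdf(z, s, lam, t, mu) for z in grid]
mass = sum(0.5 * (vals[k] + vals[k + 1]) * (grid[k + 1] - grid[k]) for k in range(400))
print(round(mass, 3))
```

When the two Gamma laws are identical, `diff_pdf(z, ...)` and `diff_pdf(-z, ...)` evaluate the same sum, which matches the symmetry of $X-Y$ noted in the comments.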


These integrals are not easy to evaluate but for the special case $s = t$, Gradshteyn and Ryzhik, Tables of Integrals, Series, and Products, Section 3.383, lists the value of $$\int_0^\infty x^{s-1}(x+\beta)^{s-1}\exp(-\nu x)\,\mathrm dx$$ in terms of polynomial, exponential and Bessel functions of $\beta$ and this can be used to write down explicit expressions for $f_{X-Y}(z)$.


From here on, we assume that $s$ and $t$ are integers so that $p(y,z)$ is a polynomial in $y$ and $z$ of degree $(s+t-2, s-1)$ and $q(x,z)$ is a polynomial in $x$ and $z$ of degree $(s+t-2,t-1)$.

  • For $z > 0$, the integral $(1)$ is the sum of $s$ Gamma integrals with respect to $y$ with coefficients $1, z, z^2, \ldots, z^{s-1}$. It follows that the density of $X-Y$ is proportional to a mixture density of $\Gamma(1,\lambda), \Gamma(2,\lambda), \cdots, \Gamma(s,\lambda)$ random variables for $z > 0$. Note that this result will hold even if $t$ is not an integer.

  • Similarly, for $z < 0$, the density of $X-Y$ is proportional to a mixture density of $\Gamma(1,\mu), \Gamma(2,\mu), \cdots, \Gamma(t,\mu)$ random variables flipped over, that is, it will have terms such as $(\mu|z|)^{k-1}\exp(\mu z)$ instead of the usual $(\mu z)^{k-1}\exp(-\mu z)$. Also, this result will hold even if $s$ is not an integer.
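To make the mixture structure concrete, take $s = t = 1$ (an exponential minus an independent exponential). Then $(1)$ and $(2)$ each collapse to a single Gamma integral, giving the asymmetric Laplace density $f_{X-Y}(z) = \frac{\lambda\mu}{\lambda+\mu}e^{-\lambda z}$ for $z \ge 0$ and $\frac{\lambda\mu}{\lambda+\mu}e^{\mu z}$ for $z < 0$ (for $\lambda = \mu$ this is the Laplace case mentioned in the comments). A quick Monte Carlo sketch comparing the implied CDF with the empirical one, with arbitrary illustrative rates:

```python
import math
import random

random.seed(0)
lam, mu = 1.5, 0.7
n = 200_000
# Samples of X - Y with X ~ Exp(lam), Y ~ Exp(mu), independent.
samples = [random.expovariate(lam) - random.expovariate(mu) for _ in range(n)]

def cdf(z):
    """CDF obtained by integrating the asymmetric Laplace density above."""
    if z < 0:
        return lam / (lam + mu) * math.exp(mu * z)
    return 1.0 - mu / (lam + mu) * math.exp(-lam * z)

for z in (-1.0, 0.0, 1.0):
    emp = sum(x <= z for x in samples) / n
    print(round(emp, 3), round(cdf(z), 3))
```

Note the two branches agree at $z = 0$ (both give $\lambda/(\lambda+\mu)$), as a valid CDF must.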

Dilip Sarwate
  • +1: Having looked at this problem before, I find this answer fascinating. – Neil G Mar 28 '13 at 05:57
  • I'm going to accept this answer even though there appears to be no closed form solution. It's as close as it gets, thanks! – FBC Jan 09 '14 at 16:26
  • I love the reasoning here, but I'm wondering if there is any measure where the second step breaks, i.e., $f_{-Y}(\alpha) ≠ f_{Y}(-\alpha)$? – mpacer Sep 29 '15 at 06:53
  • @mpacer No, $f_{-Y}(\alpha) = f_{Y}(-\alpha)$ _always_ holds. It is a general result that does not require any assumptions (normality, Gamma-eity, positive RV etc). For the special case of a positive random variable (that is, $P\{Y > 0\} = 1$), $-Y$ is a negative random variable that takes on values less than $0$ with probability $1$. – Dilip Sarwate Sep 29 '15 at 14:16
  • Surely, if you interpret the - operator differently that no longer is the case, so I'll rephrase my question. Is there an appropriate notion of `-` that differs from how we normally think about the real numbers such that the second step breaks, i.e., $f_{-Y}(\alpha) ≠ f_{Y}(-\alpha)$. Also for positive RVs it definitely breaks, since as you note $f_{Y}(-\alpha)$ would then be undefined or otherwise $f_{Y}(\alpha)$ would be undefined under normal interpretations of Positive (>0) RVs, i.e., you needed to make them into a different domain, which is a different function. – mpacer Sep 30 '15 at 07:16
  • @mpacer If $Y$ is a positive random variable with density $f_Y(\alpha)$, then it is _not true_ that $f_Y(\alpha)$ is _undefined_ for $\alpha<0$. In fact, $f_Y(\alpha)$ is **defined** as having value $0$ for $\alpha<0$. Thus, $f_{-Y}(\alpha)=f_Y(\alpha)=0$ for _all_ positive numbers $\alpha$, and the density of $Y$ is the density of $Y$ "flipped over" with respect to the origin (or vertical axis if you prefer.) I am not "interpreting" the $-$ operator differently, it is you who is demanding an "appropriate" notion of $-$ that will support your idea that the domain of $f_Y$ is $\mathbb R^+$ only – Dilip Sarwate Sep 30 '15 at 20:48
  • Thanks for this nice answer. I have a minor related question: suppose that $\alpha_x=\alpha_y$ and $\beta_x=\beta_y$. Is it correct to claim that $X-Y$ has density symmetric around zero? – TEX Jul 22 '20 at 10:07
  • @user3285148 Yes, that is correct. – Dilip Sarwate Jul 22 '20 at 14:37

To my knowledge, the distribution of the difference of two independent gamma random variables was first studied by Mathai in 1993. He derived a closed-form solution, which I will not reproduce here; instead I will point you to the original source. The closed-form solution can be found on page 241 as Theorem 2.1 in his paper On non-central generalized Laplacianness of quadratic forms in normal variables.

Nathan Crock

The difference of two independent or correlated Gamma random variables is a special case of the McKay distribution. The exact and complete answer can be found in:

Holm, H. and Alouini, M.-S., "Sum and difference of two squared correlated Nakagami variates in connection with the McKay distribution," IEEE Transactions on Communications, vol. 52, no. 8, 2004.