
Is it possible for the PDF of the difference of two iid r.v.'s to look like a rectangle (instead of, say, the triangle we get if the r.v.'s are taken from the uniform distribution)?

That is, is it possible for the PDF $f$ of $j-k$ (for two iid r.v.'s $j$ and $k$ taken from some distribution) to have $f(x) = 0.5$ for all $-1 < x < 1$?

There are no restrictions on the distribution we take $j$ and $k$ from, except that the min is $-1$ and the max is $1$.

After some experimentation, I'm thinking this might be impossible.
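For example, a quick Monte Carlo check (a minimal sketch assuming NumPy, meant only to illustrate the kind of experiment described above) shows the triangular rather than rectangular shape when the draws are uniform:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Draw two iid uniform samples on (-0.5, 0.5), so that j - k ranges over (-1, 1).
j = rng.uniform(-0.5, 0.5, n)
k = rng.uniform(-0.5, 0.5, n)

# Histogram of the difference as a density estimate of f.
hist, edges = np.histogram(j - k, bins=40, range=(-1, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# The estimated density rises linearly to about 1 at x = 0 and falls off again
# (a triangle), rather than sitting flat at 0.5 (a rectangle).
for c, h in zip(centers[::5], hist[::5]):
    print(f"x = {c:+.3f}  estimated f(x) = {h:.3f}")
```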

Nathan
  • The difference of two i.i.d. uniform random variables has a triangular distribution, so if you are asking whether the difference of i.i.d. uniforms can itself be uniform, the answer is no. – Tim May 15 '18 at 06:48
  • Same Q asked here: https://math.stackexchange.com/questions/2048939/difference-between-two-iid-random-variables-is-not-uniformly-distributed so far without answers! – kjetil b halvorsen May 15 '18 at 07:48
  • It would indeed seem difficult to avoid realizations outside $[-1,1]$ when both $j$ and $k$ have probability mass close to these endpoints. – Christoph Hanck May 15 '18 at 09:54
  • @kjetilbhalvorsen Thank you for sharing that! So it looks like it's impossible. Maybe there's something close? – Nathan May 16 '18 at 13:23
  • @Tim, the iid r.v.'s do not have to be uniform – Nathan May 16 '18 at 13:23
  • It is not possible. To my recollection this is (in slightly different form) already answered somewhere on site. I'll see if I can locate it. – Glen_b Jun 19 '18 at 10:07
  • @Glen_b You might be recalling https://stats.stackexchange.com/questions/125360/uniform-random-variable-as-sum-of-two-random-variables/125450#125450. It's not *quite* a duplicate, though, because a difference $X-Y$ of iid variables, although expressible as a sum $X+(-Y),$ could involve a sum of variables with non-identical distributions. I believe a trivial modification of my solution will address this difference; Silverfish's solution looks like it applies directly with almost no modification, but first one has to remove a lot of extraneous material to see that. – whuber Jun 19 '18 at 13:37
  • The one I was thinking of involved a sum, yes, but unless my thinking was astray it would still be adaptable to an answer. I thought the thing I remembered was either by chl or cardinal, but it's possible they actually answered in comments (hence I wouldn't find it by the search). I'll look through your link though. Edit: that wasn't the one I had in mind, but it's a very nice answer -- and it should serve quite well. – Glen_b Jun 20 '18 at 00:24

2 Answers


Theorem: There is no distribution $\text{Dist}$ for which $A-B \sim \text{U}(-1,1)$ when $A, B \sim \text{IID Dist}$.


Proof: Consider two random variables $A, B \sim \text{IID Dist}$ with common characteristic function $\varphi$, and denote their difference by $D = A - B$. The characteristic function of the difference is:

$$\begin{equation} \begin{aligned} \varphi_D(t) = \mathbb{E}(\exp(i t D)) &= \mathbb{E}(\exp(i t (A-B))) \\[6pt] &= \mathbb{E}(\exp(i t A)) \mathbb{E}(\exp(-i t B)) \\[6pt] &= \varphi(t) \varphi(-t) \\[6pt] &= \varphi(t) \overline{\varphi(t)} \\[6pt] &= |\varphi(t)|^2. \\[6pt] \end{aligned} \end{equation}$$

(The second line of this working uses the independence of $A$ and $B$, and the fourth line follows from the fact that the characteristic function is Hermitian, i.e., $\varphi(-t) = \overline{\varphi(t)}$.) Now, taking $D \sim \text{U}(-1,1)$ gives a specific form for $\varphi_D$, which is:

$$\begin{equation} \begin{aligned} \varphi_D(t) = \mathbb{E}(\exp(itD)) &= \int \limits_{\mathbb{R}} \exp(itr) f_D(r) dr \\[6pt] &= \frac{1}{2} \int \limits_{-1}^1 \exp(itr) dr \\[6pt] &= \frac{1}{2} \Bigg[ \frac{\exp(itr)}{it} \Bigg]_{r=-1}^{r=1} \\[6pt] &= \frac{1}{2} \frac{\exp(it)-\exp(-it)}{it} \\[6pt] &= \frac{1}{2} \frac{(\cos(t) + i \sin(t)) - (\cos(-t) + i \sin(-t))}{it} \\[6pt] &= \frac{1}{2} \frac{(\cos(t) + i \sin(t)) - (\cos(t) - i \sin(t))}{it} \\[6pt] &= \frac{1}{2} \frac{2i \sin(t)}{it} \\[6pt] &= \frac{\sin(t)}{t} = \text{sinc}(t). \\[6pt] \end{aligned} \end{equation}$$

The final expression is the (unnormalised) sinc function. Hence, to meet the requirements for $\text{Dist}$, we would require a characteristic function $\varphi$ with squared norm given by:

$$|\varphi(t)|^2 = \varphi_D(t) = \text{sinc}(t).$$

The left-hand side of this equation is a squared norm and is therefore non-negative, whereas the right-hand side is a function that is negative in various places. Hence, there is no solution to this equation, and so there is no characteristic function satisfying the requirements for the distribution. (Hat-tip to Fabian for pointing this out in a related question on Mathematics.SE.) Consequently, there is no distribution with the requirements of the theorem. $\blacksquare$
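To make this concrete (two small additions of mine, not part of the original proof): $\text{sinc}$ is strictly negative on $(\pi, 2\pi)$; for instance,

$$\text{sinc}\left(\tfrac{3\pi}{2}\right) = \frac{\sin(3\pi/2)}{3\pi/2} = -\frac{2}{3\pi} \approx -0.212,$$

which no squared modulus $|\varphi(t)|^2 \geq 0$ can equal. Both steps of the proof can also be checked by Monte Carlo (a minimal sketch assuming NumPy; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Check 1: phi_D(t) = |phi(t)|^2 for an arbitrary choice of Dist (here exponential).
a = rng.exponential(1.0, n)
b = rng.exponential(1.0, n)
for t in (0.5, 1.0, 2.0):
    phi = np.mean(np.exp(1j * t * a))          # empirical phi(t)
    phi_D = np.mean(np.exp(1j * t * (a - b)))  # empirical phi_D(t); imaginary part ~ 0
    print(t, round(abs(phi) ** 2, 4), round(phi_D.real, 4))  # the two columns agree

# Check 2: the characteristic function of U(-1,1) is sinc(t) = sin(t)/t,
# which is visibly negative at, e.g., t = 3*pi/2.
d = rng.uniform(-1.0, 1.0, n)
for t in (1.0, 3 * np.pi / 2):
    phi_D = np.mean(np.exp(1j * t * d)).real   # empirical CF of D ~ U(-1, 1)
    print(t, round(phi_D, 4), round(np.sin(t) / t, 4))
```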

Ben

This is an electrical engineer's take on the matter, with a viewpoint more suited to dsp.SE than to stats.SE, but no matter.

Suppose that $X$ and $Y$ are continuous random variables with common pdf $f(x)$. Then, if $Z$ denotes $X-Y$, we have that $$f_Z(z) = \int_{-\infty}^\infty f(x)f(x+z) \ \mathrm dx.$$ The Cauchy-Schwarz inequality tells us that $f_Z(z)$ has a maximum at $z=0$. In fact, since $f_Z$ is the "autocorrelation" function of $f$ regarded as a "signal", it must have a unique maximum at $z=0$, and thus $Z$ cannot be uniformly distributed as desired.

Alternatively, if $f_Z$ were indeed a uniform density (remember that it is also an autocorrelation function), then the "power spectral density" of $f_Z$ (regarded as a signal) would be a sinc function, and thus would fail to be nonnegative as all power spectral densities must be. Ergo, the assumption that $f_Z$ is a uniform density leads to a contradiction, and so the assumption must be false.
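To see the spectral argument numerically (a minimal sketch assuming NumPy, my addition rather than code from the answer): discretise the candidate boxcar $f_Z$ on a grid and take its Fourier transform, which plays the role of the power spectral density and must be nonnegative for any valid autocorrelation.

```python
import numpy as np

dz = 0.01
z = np.arange(-4, 4, dz)

# The would-be autocorrelation: a boxcar of height 0.5 on (-1, 1).
f_Z = np.where(np.abs(z) < 1, 0.5, 0.0)

# Its "power spectral density" (Fourier transform of the autocorrelation):
# for the boxcar this is a discretised sinc, which dips below zero.
psd = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f_Z))).real * dz
print("min of the boxcar's 'PSD':", psd.min())   # about -0.21 -> not a valid PSD

# Contrast: the autocorrelation of a genuine pdf, e.g. U(0, 1) on the grid.
f = np.where((z >= 0) & (z < 1), 1.0, 0.0)
acf = np.correlate(f, f, mode="same") * dz       # f_Z(z) = int f(x) f(x + z) dx
psd2 = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(acf))).real * dz
print("min of the genuine PSD:", psd2.min())     # ~ 0, nonnegative up to float error
```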

The claim that $f_Z \sim \mathcal U[-1,1]$ is obviously invalid when the common distribution of $X$ and $Y$ contains atoms, since in such a case the distribution of $Z$ will also contain atoms. I suspect that the restriction that $X$ and $Y$ have a pdf can be removed, and a purely measure-theoretic proof constructed for the general case where $X$ and $Y$ don't necessarily enjoy a pdf (but their difference does).
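As a tiny illustration of the atoms point (my addition, not from the answer): if $X$ and $Y$ are iid and discrete, every achievable difference carries positive probability, so $Z$ has atoms and cannot have a continuous uniform density.

```python
from collections import Counter
from itertools import product

# X, Y iid, each -0.5 or +0.5 with probability 1/2.
support = [-0.5, 0.5]
diffs = Counter(x - y for x, y in product(support, repeat=2))

# Each difference is an atom: P(Z = -1) = 1/4, P(Z = 0) = 1/2, P(Z = 1) = 1/4.
for value, count in sorted(diffs.items()):
    print(value, count / 4)
```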

Dilip Sarwate
  • Part of that doesn't seem right to me. The characteristic function of the $\text{U}(-1,1)$ distribution *is* the $\text{sinc}$ function, so clearly that kind of Fourier transform is allowable. Your logic seems to me to lead to [prove too much](https://en.wikipedia.org/wiki/Proving_too_much) - it appears to prove not only that $Z$ cannot be uniform, but that the uniform distribution cannot exist at all. Have I misunderstood? – Ben Jun 19 '18 at 23:58
  • Whether or not the characteristic function of $\mathcal U[-1,1]$ exists is not the issue; it does exist. The pdf of $Z$ is an _autocorrelation_ function. Well, the _power spectral density_ of _any_ autocorrelation function _must_ be a nonnegative function. So, the _assumption_ that $f_Z \sim \mathcal U[-1,1]$ leads to a power spectral density which is a sinc function (that takes on both positive and negative values). Since this is not a valid power spectral density (remember that $f_Z$ is an autocorrelation function also), the assumption that $f_Z \sim \mathcal U[-1,1]$ must be false. – Dilip Sarwate Jun 20 '18 at 14:21