5

Suppose we have a probability distribution with density $f(x)$ supported on a finite interval $[a,b]$, and let $X$ be a random variable with this density. If we recursively convolve the density of $\lambda X$ with the density of $(1-\lambda)X'$, where $0<\lambda<1$ and $X'$ is an independent copy of $X$, does the resulting distribution converge to the Dirac delta distribution at the mean of $X$?

To be more specific: let $f_1(x)$ be the density resulting from the first such convolution (the density of $\lambda X + (1-\lambda)X'$); the second convolution applies the same operation to $f_1$, and so forth...

Alternatively, this can be stated in terms of random variables. We use $'$ to denote an independent copy of a random variable, so $X'$ is an independent copy of $X$. Let $Y_0=\lambda X+(1-\lambda)X'$, $Y_1=\lambda Y_0+(1-\lambda)Y_0'$, ..., $Y_n= \lambda Y_{n-1}+(1-\lambda)Y'_{n-1}$, .... Does $Y_n$ converge to a Dirac delta distribution at the mean of $X$?

Could someone help with a formal proof? I tried to run a simulation with a discrete probability distribution with three outcomes. It seems to converge as I increase the number of convolution steps from 1 to 3, but trying 4 steps crashes my laptop...
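Here is a sketch of the kind of simulation I mean (the outcomes and probabilities below are arbitrary). Convolving the densities explicitly makes the support square at every step, which is presumably what crashes my laptop; sampling the recursion directly sidesteps that:

```python
import random
import statistics

def simulate_Yn(outcomes, probs, lam, n_steps, n_samples=100_000, seed=0):
    """Approximate the distribution of Y_n by Monte Carlo.

    Rather than convolving densities (whose support squares at every
    step), draw a large sample of X and apply the recursion
    Y_{k+1} = lam*Y_k + (1-lam)*Y_k' directly, using a shuffled copy
    of the sample as an (approximately) independent copy Y_k'.
    """
    rng = random.Random(seed)
    y = rng.choices(outcomes, weights=probs, k=n_samples)
    for _ in range(n_steps):
        y_copy = y[:]
        rng.shuffle(y_copy)
        y = [lam * a + (1 - lam) * b for a, b in zip(y, y_copy)]
    return y

# An arbitrary three-outcome distribution for X
outcomes, probs = [0, 1, 4], [0.5, 0.3, 0.2]
mean_x = sum(o * p for o, p in zip(outcomes, probs))  # 1.1
y10 = simulate_Yn(outcomes, probs, lam=0.3, n_steps=10)
print(statistics.mean(y10), statistics.pstdev(y10))
```

With $\lambda=0.3$ the sample mean stays near $1.1$ while the spread shrinks rapidly, consistent with convergence to a point mass at the mean.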

Emma
  • What is convolution? It looks like a simple addition of random variables. – dodo Feb 10 '22 at 14:58
  • Probably, you could consider the simple case of $\lambda=0.5$ first. – dodo Feb 10 '22 at 14:59
  • If the result is true, then this is very interesting; I have never seen it in the literature. Probably you could also ask on MathOverflow, where people are more familiar with the literature. Here, a lot of people are on the applied side. – dodo Feb 10 '22 at 15:03
  • Have a look at the MGF: https://en.wikipedia.org/wiki/Moment-generating_function (what is the MGF of the final distribution? what mean and variance derive from that MGF?) – seanv507 Feb 10 '22 at 16:31
  • Perhaps I have misunderstood: I take it that you assume an *iid* sequence of variables $X_n$ and you define $Y_0=X_0$ and $Y_{n+1}=\lambda Y_n + (1-\lambda)X_{n+1}$ for $n=1,2,\ldots.$ Unless $f$ has zero variance, the variance of $Y_n$ cannot be any smaller than $(1-\lambda)^2$ times the variance of $f,$ making it obvious $Y_n$ cannot converge to a constant in any sense. – whuber Feb 10 '22 at 16:37
  • @dodo That makes little sense, for it's equivalent to $Y_{n+1}=Y_n.$ Typo? – whuber Feb 10 '22 at 17:22
  • This is not an answer yet but I'm not allowed to comment. I don't quite understand the question tbh. Maybe you should clarify your terminology and what you're really asking. Firstly, I assume that you want to convolve a probability density function and not a distribution, don't you? Or your support $[a, b]$ is misleading because a distribution is defined over a set of sets. Secondly, the convolution of two functions is a function itself but the mean is not a function but a value. As such the answer to your question would be "No" as a sequence of functions can only converge towards a function. – Markov Feb 10 '22 at 17:10
  • @Markov Random variables are functions. This includes constant functions. The question refers to convergence to a particular constant function. – whuber Feb 10 '22 at 17:24
  • @whuber I made an example in the answer. I hope it works. – dodo Feb 10 '22 at 17:34
  • Also asked at https://mathoverflow.net/questions/415848/does-convolution-of-a-probability-distribution-with-itself-converge-to-its-mean – kjetil b halvorsen Feb 11 '22 at 05:03

2 Answers

3

$Y_0=\lambda X + (1-\lambda) X'$ so

$\text{var}(Y_0) = (\lambda^2 + (1-\lambda)^2)\text{var}(X)$

define $v = (\lambda^2 + (1-\lambda)^2)$

note $0.5\le v < 1$ for $0< \lambda<1$ (with $v=0.5$ exactly at $\lambda=1/2$)

$Y_{1}=\lambda Y_0 + (1-\lambda) Y_0'$ and we have the general pattern

$Y_{n+1}=\lambda Y_n + (1-\lambda) Y_n'$

since $Y_n$ and $Y_n'$ are independent copies,

$\text{var}(Y_{n+1})= v\,\text{var}(Y_n)$

Solving this recursion, with $\text{var}(Y_0)=v\,\text{var}(X)$, gives

$\text{var}(Y_{n}) = v^{n+1}\text{var}(X)$

so the variance goes to zero while $\text{mean}(Y_n)=\text{mean}(X)$ for every $n$. By Chebyshev's inequality, $P(|Y_n-\text{mean}(X)|>\epsilon)\le v^{n+1}\text{var}(X)/\epsilon^2\to 0$, so $Y_n$ converges in probability (hence in distribution) to the mean of $X$.
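As a quick numerical check (a sketch, taking $X$ to be a fair coin and computing the distribution of $Y_n$ by brute-force exact convolution):

```python
from itertools import product

def step(dist, lam):
    """Exact distribution of lam*Y + (1-lam)*Y' for independent copies Y, Y'."""
    out = {}
    for (a, pa), (b, pb) in product(dist.items(), repeat=2):
        key = lam * a + (1 - lam) * b
        out[key] = out.get(key, 0.0) + pa * pb
    return out

def moments(dist):
    m = sum(x * p for x, p in dist.items())
    return m, sum((x - m) ** 2 * p for x, p in dist.items())

lam = 0.3
v = lam ** 2 + (1 - lam) ** 2          # 0.58
dist = {0.0: 0.5, 1.0: 0.5}            # X: a fair coin, var(X) = 0.25
_, var_x = moments(dist)
for n in range(3):                      # compute Y_0, Y_1, Y_2 exactly
    dist = step(dist, lam)
    m, var = moments(dist)
    print(n, m, var, v ** (n + 1) * var_x)
```

The exact variance of $Y_n$ matches $v^{n+1}\,\text{var}(X)$ at every step, while the mean stays at $0.5$.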

seanv507
  • Very interesting! May I ask how can I find the formula for the variance of convolution? Have done some google search but found nothing. – dodo Feb 10 '22 at 18:03
  • You need to search for the variance of a sum of independent variables. The convolution is a way of determining the distribution of a sum of independent variables. https://www.statlect.com/fundamentals-of-probability/sums-of-independent-random-variables – seanv507 Feb 10 '22 at 18:07
  • How, exactly, do you obtain that final formula for the variance, and what exactly do you mean by "$Y_n^\prime$"? The latter has not been defined in the question--only $Y_0^\prime$ has been. – whuber Feb 10 '22 at 18:10
  • @whuber, the question is using ... and 'and so forth' to define an iterative pattern which has been defined for $Y_0, Y_1$. I have extended the answer to explain how I derive the variance (by solving the recursive formula). – seanv507 Feb 10 '22 at 18:46
  • Upon rereading the question, I agree with your interpretation. (+1) – whuber Feb 10 '22 at 19:20
  • Thank you. I agree with the calculation of the variance, though to get the answer needed for this post the following statement is also needed: the variance of a random variable is $0$ if and only if the variable is almost surely constant. See here: https://proofwiki.org/wiki/Random_Variable_has_Zero_Variance_iff_Almost_Surely_Constant – Emma Feb 11 '22 at 01:42
0

This is just an example suggesting that your statement may be wrong, at least in some scenarios.

Suppose we have two independent random variables with the same distribution:

$Y_0$: value 1 with probability 50%, value 0 with probability 50%.

$Y_0'$: an independent copy of $Y_0$.

Let $Y_1=\tfrac{1}{2}Y_0+\tfrac{1}{2}Y_0'$.

Then $Y_1$ takes value 1 with probability 25%, value 0.5 with probability 50%, and value 0 with probability 25%.

The 25% comes from $50\%\times50\%$.

Then let $Y_1'$ be an independent copy of $Y_1$, with exactly the same distribution.

Repeating this, it seems that the probability of $Y_n=0.5$ can never exceed $50\%$.
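For $\lambda=1/2$ this is cheap to check exactly, since $Y_n$ is then the average of $2^{n+1}$ independent copies of the coin and its support stays small. The sketch below (my own check) tabulates both $P(Y_n=0.5)$ and the mass more than $0.1$ away from $0.5$: the point probability at the mean indeed stays below $50\%$, but the mass away from the mean shrinks too, which is what convergence to a point mass requires.

```python
from itertools import product

def step(dist):
    """Exact distribution of (Y + Y')/2 for independent copies Y, Y'."""
    out = {}
    for (a, pa), (b, pb) in product(dist.items(), repeat=2):
        key = (a + b) / 2
        out[key] = out.get(key, 0.0) + pa * pb
    return out

dist = {0.0: 0.5, 1.0: 0.5}    # the fair coin
for n in range(7):             # Y_0 .. Y_6; support stays small for lam = 1/2
    dist = step(dist)
    p_mean = dist.get(0.5, 0.0)                                    # P(Y_n = 0.5)
    p_far = sum(p for x, p in dist.items() if abs(x - 0.5) > 0.1)  # P(|Y_n - 0.5| > 0.1)
    print(n, round(p_mean, 4), round(p_far, 4))
```

Both columns shrink toward zero, so the lack of mass exactly at $0.5$ does not prevent convergence.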

dodo
  • This does not demonstrate lack of convergence, however! It is quite possible for every random variable in a sequence to have *zero* chance to equal their common mean, yet for that sequence to converge to a constant. The standard example is a sequence of averages of *iid* standard Normal variables. – whuber Feb 10 '22 at 17:52
  • @dodo I don't see it from this example. To simplify the notation I write $Y_1$ as (1, 25%; 0.5, 50%; 0, 25%). I get $Y_2$ = (1, 6.25%; 0.75, 25%; 0.5, 37.5%; 0.25, 25%; 0, 6.25%). Indeed 0.5 has probability lower than 50%, but the other values (e.g. 0.75) converge toward 0.5, and for those that do not (e.g. 1 or 0 in this example) the corresponding probabilities approach 0. – Emma Feb 11 '22 at 07:24
  • @Emma The probability at the mean (0.5) is also approaching zero. Can you figure it out? – dodo Feb 11 '22 at 16:48