
I need to prove an induction step. The $X_i$ are independently distributed with tail functions $1-F_i(x)=x^{-\alpha}L_{i}(x)$, where $\alpha \geq 0$ and $L_{i}$ is slowly varying ($L$ is slowly varying if $\lim\limits_{x\rightarrow\infty}\frac{L(ax)}{L(x)}=1$ for every $a>0$; the tail $x^{-\alpha}L_i(x)$ is then regularly varying of index $-\alpha$).

Assume that $\lim\limits_{x\rightarrow \infty} \frac{P(X_1+\dots+X_n>x) }{P(X_1 > x)+\dots+P(X_n>x)} = 1$ holds (the induction hypothesis).

Now we have to show:

$\lim\limits_{x\rightarrow \infty} \frac{P(X_1+\dots+X_{n+1}>x) }{P(X_1 > x)+\dots+P(X_{n+1}>x)} = 1.$

How do we show this?
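(The $n=2$ case of the claimed limit is easy to check numerically. A minimal Monte Carlo sketch, assuming iid Pareto($\alpha$) variables so that $P(X>x)=x^{-\alpha}$ for $x\ge 1$; the value of $\alpha$, the $x$ grid, and the sample size are arbitrary choices for illustration, not part of the problem:)

```python
import random

# Sanity check of the claimed limit for n = 2 with iid Pareto tails.
# For Pareto(alpha) on [1, inf), P(X > x) = x**(-alpha) exactly.
random.seed(0)
alpha, n_samples = 1.5, 300_000

def pareto():
    # Inverse-transform sampling: U**(-1/alpha) is Pareto(alpha) for U in (0, 1].
    return (1.0 - random.random()) ** (-1.0 / alpha)

for x in (5.0, 20.0, 80.0):
    hits = sum(pareto() + pareto() > x for _ in range(n_samples))
    ratio = (hits / n_samples) / (2.0 * x ** -alpha)
    # The ratio drifts toward 1 as x grows, as the limit predicts.
    print(f"x = {x:5.1f}  P(X1+X2 > x) / [P(X1 > x) + P(X2 > x)] = {ratio:.3f}")
```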

Chris

2 Answers


I have found some info on this problem. This question is about the proof of a theorem due to Feller, found in volume 2 of his "An Introduction to Probability Theory and Its Applications" (pp. 278-279). Here is a restatement.

$\mathbf{Theorem.}$ Let $X_1,\dots,X_n$ be independent random variables with distribution functions satisfying $1-F_i(x)\sim x^{-\alpha}L_i(x)$, where $L_i$ is slowly varying at infinity. Then, the convolution $G_n:=F_1\star\dots\star F_n$ has a regularly varying tail such that $$1-G_n(x)\sim x^{-\alpha}(L_1(x)+\dots+L_n(x)) \, .$$

Feller proves the case with two random variables and just states that the general result follows by induction. By the way, his proof of the $n=2$ case is a gem.

So we already know from Feller that the theorem holds for two random variables. To prove the induction step, suppose that the theorem holds for $n-1$ random variables, which means that $$1-G_{n-1}(x)\sim x^{-\alpha}(L_1(x)+\dots+L_{n-1}(x)) \, .$$ Since a sum of slowly varying functions is itself slowly varying, $X_1+\dots+X_{n-1}$ is a random variable, independent of $X_n$, whose distribution function $G_{n-1}$ satisfies the tail hypothesis of the theorem, that is, $1-G_{n-1}(x)\sim x^{-\alpha}M(x)$, where $M=L_1+\dots+L_{n-1}$ is slowly varying. By the associativity of convolution, $$ G_n = F_1\star\dots\star F_{n-1}\star F_n = (F_1\star\dots\star F_{n-1})\star F_n = G_{n-1}\star F_n\, , $$ and we are back to the (already proved by Feller) case of two random variables satisfying the hypotheses of the theorem. Therefore, $$ 1 - G_n(x) \sim x^{-\alpha}(M(x)+L_n(x)) = x^{-\alpha}(L_1(x)+\dots+L_{n-1}(x)+L_n(x)) \, . $$

Hence, the tail of $G_n$ satisfies the necessary property, the theorem holds for $n$ random variables, and we are done with the induction step.
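As a numerical illustration of the theorem with non-identical tails (not part of the proof), take $X_i = c_i Z_i$ with $Z_i$ iid Pareto($\alpha$), so $1-F_i(x) = (x/c_i)^{-\alpha}$ and $L_i(x) = c_i^{\alpha}$ is constant, hence slowly varying. The theorem then predicts $1-G_2(x)\sim (c_1^{\alpha}+c_2^{\alpha})\,x^{-\alpha}$. A minimal simulation sketch, with the constants, $x$, and sample size chosen arbitrarily:

```python
import random

# Non-identical regularly varying tails: X_i = c_i * Pareto(alpha), so that
# 1 - F_i(x) = (x / c_i)**(-alpha), i.e. L_i(x) = c_i**alpha (constant).
# The theorem predicts 1 - G_2(x) ~ (c1**alpha + c2**alpha) * x**(-alpha).
random.seed(1)
alpha, c1, c2, n = 2.0, 1.0, 3.0, 400_000
x = 60.0

hits = 0
for _ in range(n):
    x1 = c1 * (1.0 - random.random()) ** (-1.0 / alpha)
    x2 = c2 * (1.0 - random.random()) ** (-1.0 / alpha)
    hits += (x1 + x2 > x)

predicted = (c1 ** alpha + c2 ** alpha) * x ** (-alpha)
ratio = (hits / n) / predicted
# Ratio is close to 1 (slightly above, since x is finite).
print(f"empirical / predicted tail at x = {x}: {ratio:.3f}")
```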

Zen
  • That's a good job Zen. – Michael R. Chernick May 26 '12 at 23:30
  • Tks, Michael! Check out Feller's proof of the $n=2$ case. He does it like a boss. – Zen May 26 '12 at 23:43
  • (+1) Unfortunately, I don't have Feller vol. 2 handy and am curious to see his proof. Slow variation is our friend; the simplest way I can think to make it work is to fix $\epsilon > 0$ and choose judiciously $\newcommand{\one}{\delta_1}\newcommand{\two}{\delta_2} \one = \one(\epsilon) > 0$ and $\two = \two(\epsilon) > 0$. Then, we "split" on them as follows, $$\{X_1 > (1+\one)x, X_2 > -\one x\} \cup \{-\one x < X_1 < (1+\one)x, X_2 > (1+\one)x \} \subset \{X_1 + X_2 > x\} \subset \{X_1> \two x, X_2 \leq \two x\} \cup \{X_1 \leq \two x, X_2 > \two x\} \cup \{X_1 > \two x, X_2 > \two x\} \>.$$ – cardinal May 27 '12 at 02:33
  • All the unions are disjoint (provided $\two < 1$) and by independence, several terms converge to either one or zero as $x \to \infty$. The remaining terms seem to work out by using $L(\delta x)/L(x) \to 1$. – cardinal May 27 '12 at 02:35
  • Zen and everybody... you guys are really really great. Are you aware of that? Thank you so much, I am beyond words! – Chris May 28 '12 at 01:34

You would use induction. Assume the result is true for $n$ and then show it for $n+1$. Write $P(X_1+X_2+\dots+X_n+X_{n+1}>x)$ as $P(X_1+X_2+\dots+X_n>x-X_{n+1})$, which equals $\int P(X_1+X_2+\dots+X_n>x-y)f(y)\,dy$, where $f$ is the density of $X_{n+1}$. Then try to express the ratio as a factor times the ratio in the form of the induction hypothesis for $n$. The limit of the product should be the product of the limits; evaluate the two limits. One goes to $1$ by the induction hypothesis, and the other factor should also converge to $1$.
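(The conditioning identity above can be sanity-checked numerically in the simplest case, two iid Pareto($\alpha$) variables, by comparing the convolution integral against a direct Monte Carlo estimate. A sketch with arbitrary parameter choices:)

```python
import random

# Conditioning step for independent X1, X2:
#   P(X1 + X2 > x) = integral of P(X1 > x - y) f(y) dy,
# where f is the density of X2.  Checked for iid Pareto(alpha) on [1, inf).
random.seed(2)
alpha, x = 2.0, 10.0

def tail(z):              # P(X > z) for Pareto(alpha) on [1, inf)
    return 1.0 if z <= 1.0 else z ** -alpha

def density(y):           # f(y) = alpha * y**(-alpha - 1) on [1, inf)
    return alpha * y ** (-alpha - 1.0)

# Midpoint-rule integral over 1 <= y <= x - 1; for y > x - 1 the integrand's
# tail factor is 1, so that piece contributes exactly P(X2 > x - 1).
steps = 200_000
a, b = 1.0, x - 1.0
h = (b - a) / steps
quad = sum(tail(x - (a + (i + 0.5) * h)) * density(a + (i + 0.5) * h)
           for i in range(steps)) * h
quad += tail(b)

# Direct Monte Carlo estimate of the same probability.
n = 300_000
hits = sum((1.0 - random.random()) ** (-1.0 / alpha)
           + (1.0 - random.random()) ** (-1.0 / alpha) > x for _ in range(n))
mc = hits / n
print(f"convolution integral = {quad:.5f}, monte carlo = {mc:.5f}")
```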

Michael R. Chernick
  • I need some more input here, I am not quite getting there. – Chris May 25 '12 at 02:57
  • Can't help without knowing where you are stuck. Were you able to take the left-hand side and factor it into two terms, with one being the ratio in the assumption that the result is true for $n$? – Michael R. Chernick May 25 '12 at 03:10
  • $\frac{P(X_1+...+X_n>x-X_{n+1}) }{P(X_1 > x)+...+P(X_{n+1}>x)} $. The problem is the last term $P(X_{n+1}>x)$ in the denominator. How do I rewrite this so that I can use somehow the induction hypothesis? – Chris May 25 '12 at 03:28
  • Multiply the numerator and denominator by $P(X_1>x)+P(X_2>x)+\dots+P(X_n>x)$. – Michael R. Chernick May 25 '12 at 03:32
  • Yes, there has to be a reason why the regular variation property is being assumed. – Michael R. Chernick May 25 '12 at 20:47
  • Yes, this is what I am trying to figure out... – Chris May 26 '12 at 03:31
  • @Chris Okay but are you expecting more help? I am not sure what else we can do short of giving you a complete solution. – Michael R. Chernick May 26 '12 at 03:51
  • Do you know how to do the complete solution? That would give me some comfort, knowing that someone knows how to do it. – Chris May 26 '12 at 04:08
  • I haven't gone through the exercise. – Michael R. Chernick May 26 '12 at 04:21
  • @Zen: Unfortunately, the hint provided in your comment is a dead end. I'm sure you see why. :) – cardinal May 26 '12 at 20:10
  • @cardinal Are you saying that the regular variation property can't be used? Or just that it has to be applied differently? – Michael R. Chernick May 26 '12 at 20:21
  • At the very least, it should be applied somewhat differently. (Take the $X_i$ to be iid and check if the limit in @Zen's comment is true.) :) – cardinal May 26 '12 at 20:25
  • Zen, did you try checking your claim against the case where $L_i(x) = L(x)$ for all $i$? :-) – cardinal May 26 '12 at 19:57
  • Sorry to be dense but could someone tell me why the limit to be proven is false? – Michael R. Chernick May 26 '12 at 21:53
  • Michael, you're not being dense; I was likely being too obtuse. If the $X_i$ are iid, then $$\lim_{x\to\infty}\frac{\sum_{i=1}^n \mathbb P(X_i>x)}{\sum_{i=1}^{n+1} \mathbb P(X_i>x)} = \frac{n}{n+1} \>.$$ – cardinal May 26 '12 at 21:55
  • @cardinal So you are saying the hint doesn't work, not the induction result. But why is the above true? Since the $L_i$ are different, the $1-F_i$ are different. – Michael R. Chernick May 26 '12 at 22:51
  • Michael, yes, as stated in my comments, I was referring to Zen's hint. Note that the statement in the question is for independent random variables. The hint was purported to provide a means for a proof. A subcase to consider is for iid random variables and for this the hint is clearly false. So, it can't possibly be the right avenue toward the general result. :) – cardinal May 26 '12 at 22:54
  • I see: the ratio can't converge to 1 in general. It was actually my hint, and Zen just said that if it holds it will complete the induction. – Michael R. Chernick May 26 '12 at 23:26
  • We better prove things right over here, cause Mr. Cardinal is always watching ;-) – Zen May 27 '12 at 00:28
  • @Zen I would want to do that anyway but unfortunately I am not perfect and will make mistakes occasionally. In this case I made a suggestion that I hoped would help but hadn't checked out. – Michael R. Chernick May 27 '12 at 00:33
  • We all make mistakes, Michael, no problem. I was just kidding. This proposition is interesting because it can be used to go a little beyond the central limit theorem, telling us about the behavior of the tail of the distribution of $S_n$. – Zen May 27 '12 at 00:45
  • @Zen my remark did not mean that I didn't see the humor in your statement! I got it. Cardinal is very good at checking for technical details. – Michael R. Chernick May 27 '12 at 00:54