
Suppose the $X_i$ are i.i.d. with $X_i>0$ and $E[X_i]>1.$ If $E[X_i]$ is known, can we find an upper or lower bound for the following expectation: $$ E\left[\frac{\prod_{i=1}^n X_i}{\prod_{i=1}^n X_i+\prod_{i=n+1}^m X_i}\right]?$$

StubbornAtom
Claucisco

2 Answers


Because the expression in the question obviously lies strictly between $0$ and $1,$ those numbers work as bounds. The issue is whether they can be improved on.

Presumably you are looking for bounds that depend on $n,$ $m,$ and the common expectation $E[X_i]=\mu \ge 1,$ but are otherwise universal in the sense that they do not depend on the distribution of the $X_i.$ That is, you seek a pair of functions $U$ and $L$ for which, no matter what the common distribution of the $X_i$ may be,

$$L(\mu,m,n) \le E\left[f(\mathbf{X};n)\right] \le U(\mu,m,n)$$

where $(X_1,\ldots, X_m) = \mathbf{X}$ and

$$f(\mathbf X;n) = \frac{\prod_{i=1}^n X_i}{\prod_{i=1}^n X_i + \prod_{i=n+1}^m X_i}.$$

Moreover, if possible these bounds should be "non-trivial" in the sense that if ever $L$ were made any larger or $U$ were made any smaller, then the inequalities would no longer hold for all distributions.
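As a quick numerical illustration of the trivial bounds, here is a small Python sketch; the function name `f_ratio` and the sample distribution are my own choices, not anything from the question.

```python
import random

def f_ratio(x, n):
    """f(x; n) = prod(x[:n]) / (prod(x[:n]) + prod(x[n:]))."""
    num = 1.0
    for v in x[:n]:
        num *= v
    den = 1.0
    for v in x[n:]:
        den *= v
    return num / (num + den)

random.seed(1)
m, n = 5, 3
for _ in range(1000):
    x = [random.uniform(0.1, 3.0) for _ in range(m)]
    # Both products are strictly positive, so f lies strictly in (0, 1).
    assert 0.0 < f_ratio(x, n) < 1.0
```

The question, of course, is how much these endpoints can be tightened.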

We may simplify the analysis a little by exploiting the algebraic identity $f(\mathbf{X};n) = 1 - f(\mathbf{X}^\sigma;m-n),$ where $\sigma$ rotates the last $m-n$ coordinates of $\mathbf{X}$ to the front. Because the $X_i$ are iid, $\mathbf{X}^\sigma$ has the same distribution as $\mathbf{X},$ and taking expectations shows

$$L(\mu,m,n) = 1 - U(\mu,m,m-n).\tag{*}$$

It will suffice, then, to establish bounds for the cases $n \ge m-n.$ The upper bound is more interesting, so let's address it first.
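The symmetry relation is, pointwise, pure algebra: rotating the last $m-n$ coordinates to the front swaps the numerator product and the denominator's second product, so the two values of $f$ sum to exactly $1.$ A short check (the function name is my own):

```python
import random

def f_ratio(x, n):
    num = den = 1.0
    for v in x[:n]:
        num *= v
    for v in x[n:]:
        den *= v
    return num / (num + den)

random.seed(2)
m, n = 7, 4
x = [random.uniform(0.1, 3.0) for _ in range(m)]
# Rotating the last m-n coordinates to the front swaps the two products,
# so f(x; n) + f(rotated x; m-n) = 1 identically.
y = x[n:] + x[:n]
assert abs(f_ratio(x, n) + f_ratio(y, m - n) - 1.0) < 1e-12
```

The iid assumption is only needed to conclude that the rotated vector has the same distribution as the original, which converts this pointwise identity into the relation between the bounds.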


Upper bound

Suppose $A,$ $B,$ and $B^\prime$ are independent random variables; that $B$ and $B^\prime$ have the same distribution; and that the exponentials of each variable all have finite expectations. Write $\alpha=E[\exp(A)].$

Note that $B-B^\prime$ is symmetric around $0:$ because $B$ and $B^\prime$ are independent with a common distribution, $B-B^\prime$ and its negation $B^\prime-B$ have the same distribution. Consequently $e^{B-B^\prime}$ and $e^{B^\prime-B}$ have the same (possibly infinite) expectation, whence

$$E\left[e^{B-B^\prime}\right] = E\left[\frac{e^{B-B^\prime} + e^{B^\prime-B}}{2}\right] = E\left[\cosh\left(B-B^\prime\right)\right] \ge 1$$

because $\cosh$ never takes a value less than $1.$ (When $E[B-B^\prime]$ exists, symmetry forces it to be $0,$ and Jensen's Inequality applied to the convex exponential function gives the same conclusion; the $\cosh$ argument has the advantage of not requiring that expectation to be defined.)

Lemma $$E\left[\frac{e^{A+B^\prime}}{e^{A+B^\prime} + e^B}\right] = E\left[\frac{e^{A}}{e^{A} + e^{B-B^\prime}}\right]\le \frac{\alpha}{1+\alpha}.$$

The proof applies Jensen's Inequality twice. First, because $E\left[\exp(B-B^\prime)\right] \ge 1$ (from the analysis preceding the lemma) and $B-B^\prime$ is independent of $A,$

$$E\left[\frac{e^{A}}{e^{A} + e^{B-B^\prime}}\mid A\right] \le \frac{e^{A}}{e^{A} + 1}.$$

Second, because the function $a \to a/(a+1)$ (for positive $a$) is concave,

$$E\left[\frac{e^{A}}{e^{A} + 1}\right] \le \frac{E\left[e^A\right]}{E\left[e^A\right]+1} = \frac{\alpha}{1+\alpha}.$$

Taking the expectations successively yields

$$E\left[\frac{e^{A+B^\prime}}{e^{A+B^\prime} + e^B}\right] = E\left[E\left[\frac{e^{A}}{e^{A} + e^{B-B^\prime}}\mid A\right] \right] \le \frac{\alpha}{1+\alpha},$$

QED.
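A Monte Carlo sanity check of the lemma; the particular distributions here (a half-normal $A,$ so that $e^A \ge 1,$ and standard normal $B,$ $B^\prime$) are my own arbitrary choices for illustration.

```python
import math
import random

random.seed(3)
N = 20000
total = 0.0
alpha_hat = 0.0
for _ in range(N):
    a = abs(random.gauss(0.0, 1.0))   # A = |Z|: an arbitrary nonnegative choice
    b = random.gauss(0.0, 1.0)        # B and B' are iid N(0, 1)
    bp = random.gauss(0.0, 1.0)
    total += math.exp(a + bp) / (math.exp(a + bp) + math.exp(b))
    alpha_hat += math.exp(a)
lhs = total / N
alpha_hat /= N
# The estimated expectation should fall below alpha / (1 + alpha).
assert lhs <= alpha_hat / (1.0 + alpha_hat)
```

Here $\alpha$ is estimated from the same sample; with these distributions the estimate sits well below the bound, which is attained only in degenerate cases.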

Since $n\ge m-n$ and the $X_i$ all have positive support, the variables

$$A = \log\prod_{i=1}^{2n-m} X_i,\quad B^\prime = \log\prod_{i=2n-m+1}^{n} X_i,\quad B = \log\prod_{i=n+1}^m X_i$$

are all defined and satisfy the conditions of the lemma. Moreover, independence of the $X_i$ implies

$$\alpha = E\left[e^A\right] = E\left[\prod_{i=1}^{2n-m} X_i\right] = \prod_{i=1}^{2n-m} E\left[X_i\right] = \mu^{2n-m}.$$

When the $X_i$ have the constant distribution $\Pr(X_i = \mu)=1,$ the upper bound of the lemma actually is obtained: it therefore is the best possible bound.

$$\text{When } n \ge m-n,\ E\left[f(\mathbf{X};n)\right] \le \frac{\mu^{2n-m}}{1 + \mu^{2n-m}} = \frac{1}{1 + \mu^{m-2n}} = U(\mu,m,n).$$

This bound is strictly less than $1:$ it is a non-trivial result.
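A numerical check of this bound; the shifted-exponential distribution ($X = 0.5 + \text{Exponential}(1),$ so $\mu = 1.5$) and the choice $m=5,$ $n=3$ are my own illustration, not part of the answer above.

```python
import math
import random

random.seed(4)
m, n = 5, 3
mu = 1.5                      # E[X] for X = 0.5 + Exponential(1)
bound = mu ** (2 * n - m) / (1.0 + mu ** (2 * n - m))   # here 1.5/2.5 = 0.6

N = 50000
total = 0.0
for _ in range(N):
    x = [0.5 + random.expovariate(1.0) for _ in range(m)]
    num = math.prod(x[:n])
    den = math.prod(x[n:])
    total += num / (num + den)
est = total / N
# The estimate lands comfortably below the bound: equality requires
# the degenerate distribution concentrated at mu.
assert est <= bound
```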


Lower bound

When $n \gt m-n,$ the best possible lower bound is $L(\mu,m,n)=0.$

Since $f$ must be positive, obviously this bound can never be attained. I will demonstrate the claim, then, by exhibiting distributions for the $X_i$ where an upper bound for the expectation of $f(\mathbf{X};n)$ can be made arbitrarily close to $0.$ These are the simplest possible distributions: they are supported on just two values and therefore are determined by those values and their probabilities.

Pick a number $\epsilon$ between $0$ and $\mu,$ a nonzero probability $p,$ and set $T = (\mu-(1-p)\epsilon)/p \gt \mu\gt 0.$ Let the common distribution of the $X_i$ give $T$ with probability $p$ and $\epsilon$ with probability $1-p,$ so that indeed

$$E[X_i] = \mu = pT + (1-p)\epsilon$$

as required.

Because the $X_i$ are independent, the chance that all $m$ of them equal $\epsilon$ is simply $(1-p)^m.$ When this occurs,

$$f((\epsilon, \ldots, \epsilon);n) = \frac{\epsilon^n}{\epsilon^n + \epsilon^{m-n}} = \frac{1}{1 + \epsilon^{m-2n}}.$$

When $n \gt m-n,$ this fraction can be made arbitrarily close to $0$ by choosing $\epsilon$ suitably small. Then

$$E[f(\mathbf{X};n)] = (1-p)^m \frac{1}{1 + \epsilon^{m-2n}} + \text{other stuff}$$

where "other stuff" consists of the other possible values of $f$ multiplied by their probabilities. Since $f$ never exceeds $1,$ we obtain an upper bound for the expectation by replacing those other possible values by $1,$ yielding

$$E[f(\mathbf{X};n)] \le (1-p)^m \frac{1}{1 + \epsilon^{m-2n}} + 1 - (1-p)^m.$$

First choose $\epsilon$ so small that the factor $1/(1 + \epsilon^{m-2n})$ is as close to $0$ as desired; then, as $p$ shrinks to $0,$ the latter term $1 - (1-p)^m$ also shrinks to $0,$ whence the entire right hand side can be made arbitrarily close to $0.$ Thus, the only possible lower bound for the expectation when $n \gt m-n$ is $0,$ QED.
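Because the two-point distribution has finite support, the expectation can be computed exactly by conditioning on how many of the first $n$ and last $m-n$ draws equal $T.$ In this sketch the function name and the parameter choices ($m=4,$ $n=3,$ $\mu = 1.5,$ $\epsilon = 0.01,$ $p = 0.05$) are my own:

```python
from math import comb

def exact_Ef(m, n, mu, eps, p):
    """Exact E[f(X;n)] when X equals T with prob. p and eps with prob. 1-p."""
    T = (mu - (1 - p) * eps) / p
    total = 0.0
    for k in range(n + 1):              # number of T's among X_1 .. X_n
        for j in range(m - n + 1):      # number of T's among X_{n+1} .. X_m
            prob = (comb(n, k) * p**k * (1 - p)**(n - k)
                    * comb(m - n, j) * p**j * (1 - p)**(m - n - j))
            num = T**k * eps**(n - k)
            den = T**j * eps**(m - n - j)
            total += prob * num / (num + den)
    return total

val = exact_Ef(m=4, n=3, mu=1.5, eps=0.01, p=0.05)
# Already close to 0; shrinking eps and p together drives it lower still.
assert val < 0.04
```

Shrinking $\epsilon$ and $p$ further pushes the value toward $0,$ exhibiting the claimed behavior concretely.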

Finally, when $n = m-n,$ the symmetry relation $(*)$ implies the lower bound and the upper bound (which equals $1/(1+1)=1/2$) sum to $1,$ whence the lower bound also is $1/2.$ Although this is obvious because $f(\mathbf{X};n)$ and $1 - f(\mathbf{X};n)$ have the same distribution in this case, forcing $E[f(\mathbf{X};n)] = 1/2,$ it's nice that the results of these analyses are consistent with it!

whuber

Clearly, dividing the numerator and denominator of the fraction by the (nonzero) numerator shows the ratio, and hence its expectation, lies between $0$ and $1.$

AJKOER
  • As you note at the outset, this result is trivial. The OP has commented that there exist cases where the expectation has a tighter bound than these, so evidently they are looking for an answer that provides some insight into such situations. – whuber Nov 25 '20 at 16:32