
Let $X_i$, $i\in\{1,\dots,n\}$, be a sequence of bounded, (potentially) correlated random variables, no two of which are perfectly correlated.

I am interested in $$ Y=\lim_{n\rightarrow \infty}\text{Var}\left(\max_i X_i\right) $$

Intuitively, it seems that we should have $Y=0$ or at least $$ Y \leq \max_i \text{Var}X_i $$ Is that correct? If not, is there any way to bound $Y$ from above?

user_lambda
  • What are the bounds on the X variables supposed to be? – gung - Reinstate Monica Jan 19 '17 at 17:10
  • Two constants $a<b$. – user_lambda Jan 19 '17 at 19:04
    The "at least" statement is a simple consequence of the definition of a limit of a sequence of real numbers. It could be a really terrible bound, though: $Y$ might be zero but you could start off with $X_1$ having a variance as large as $(b-a)^2/4$, which therefore would be the maximum of all the variances of the $X_i$. The real question, then, is whether you can do any better than this universal bound--but you cannot, unless you impose additional restrictions on the sequence. – whuber Jan 19 '17 at 22:24

1 Answer


The best possible bound is $(b-a)^2/4$ when all the $X_i$ are bounded between $a$ and $b$.

The proof is constructive: I will exhibit a sequence $(X_i)$ meeting all the requirements of the question whose limiting variance actually equals this bound.

To help get you oriented, notice as you read the construction below that each of the $X_i$ will have a Bernoulli$(1/2 - 1/2^{i+1})$ distribution and that they will all be strongly positively correlated--but never perfectly correlated.


All variables are defined on a common probability space $(\Omega, \mathcal{S}, \mathbb{P})$. Let $\Omega=\{0,1,2,3,\ldots\}$ be the natural numbers, let $\mathcal{S}$ consist of all subsets of $\Omega$, and let $\mathbb{P}$ be the probability measure

$$\mathbb{P}(\omega) = 2^{-\omega-1}.$$

Define a random variable $X$ to be

$$X(0) = 0;\ X(1)=X(2)=X(3)=\cdots=1.$$

For $i=1,2,3,\ldots,$ modify this variable by setting

$$X_i(\omega) = \begin{cases}0, & \omega=i;\\ X(\omega), & \text{otherwise.}\end{cases}$$
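As a quick sanity check, here is a small numerical sketch of this construction (Python is not part of the answer; the truncation point `N = 60` is an arbitrary choice, and the omitted tail of $\Omega$ carries probability only $2^{-N}$):

```python
N = 60  # truncate Omega = {0, 1, 2, ...} here; the tail beyond has probability 2**-N

def P(omega):
    """The probability measure P(omega) = 2^(-omega - 1)."""
    return 2.0 ** (-(omega + 1))

def X(omega):
    """X(0) = 0 and X(omega) = 1 for omega >= 1."""
    return 0 if omega == 0 else 1

def X_i(i, omega):
    """X_i agrees with X except that it is forced to 0 at omega = i."""
    return 0 if omega == i else X(omega)

# The probabilities form a geometric series summing to 1 (up to the truncated tail).
total = sum(P(w) for w in range(N))
print(total)  # very close to 1.0
```

Truncating at any large `N` works equally well; exact arithmetic (e.g. `fractions.Fraction`) could replace floats, but the tail of the series is far below floating-point precision anyway.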

Let us compute some moments. Since every $X_i(\omega)\in\{0,1\}$, $X_i = X_i^2$. Thus

$$\mathbb{E}(X_i^2) = \mathbb{E}(X_i) =\sum_{\omega\in\Omega} \mathbb{P}(\omega)X_i(\omega) = \frac{1}{2} - \frac{1}{2^{i+1}}$$

and for $j\ne i$

$$\mathbb{E}(X_iX_j) =\sum_{\omega\in\Omega} \mathbb{P}(\omega)X_i(\omega)X_j(\omega) = \frac{1}{2} - \frac{1}{2^{i+1}} - \frac{1}{2^{j+1}}.$$

Therefore

$$\operatorname{Var}(X_i) = \mathbb{E}(X_i^2) - \mathbb{E}(X_i)^2 = \left(\frac{1}{2}-\frac{1}{2^{i+1}}\right) - \left(\frac{1}{2}-\frac{1}{2^{i+1}}\right)^2 = \frac{1}{4}-\frac{1}{2^{2i+2}}$$

and

$$\begin{aligned} \operatorname{Cov}(X_i,X_j) &= \mathbb{E}(X_iX_j) - \mathbb{E}(X_i)\mathbb{E}(X_j) \\ &= \frac{1}{2} - \frac{1}{2^{i+1}} - \frac{1}{2^{j+1}} - \left(\frac{1}{2} - \frac{1}{2^{i+1}}\right)\left(\frac{1}{2} - \frac{1}{2^{j+1}}\right) \\ &= \frac{1}{4}-\frac{1}{2^{i+2}}-\frac{1}{2^{j+2}}-\frac{1}{2^{i+j+2}}. \end{aligned}$$
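These moment calculations can be verified numerically. The following self-contained sketch truncates $\Omega$ at `N = 60` (omitting only probability $2^{-60}$) and uses the arbitrary indices `i = 3`, `j = 7`:

```python
N = 60  # truncation of Omega = {0, 1, 2, ...}; the omitted tail has probability 2**-N

def P(omega):
    return 2.0 ** (-(omega + 1))  # P(omega) = 2^(-omega - 1)

def X_i(i, omega):
    return 0 if omega == 0 or omega == i else 1

def E(f):
    """Expectation of f over the truncated sample space."""
    return sum(P(w) * f(w) for w in range(N))

i, j = 3, 7                                # arbitrary distinct indices
m_i, m_j = E(lambda w: X_i(i, w)), E(lambda w: X_i(j, w))
var_i = m_i - m_i ** 2                     # X_i^2 = X_i, so Var = m - m^2
var_j = m_j - m_j ** 2
cov = E(lambda w: X_i(i, w) * X_i(j, w)) - m_i * m_j

print(var_i)                     # approximately 1/4 - 2^(-2i-2)
print(cov ** 2 < var_i * var_j)  # True: X_i and X_j are not perfectly correlated
```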

Because (when $j\ne i$) the squared covariance is not the product of the variances, $X_i$ is never perfectly correlated with $X_j$. This situation therefore complies with all the requirements of the question. Moreover, as soon as $n\ge 2$ we have $\max_{1\le i\le n} X_i = X$: for any $\omega\ge 1$ there is some index $i\le n$ with $i\ne\omega$, and then $X_i(\omega)=1$. Hence the variance of the maximum equals $\operatorname{Var}(X) = \frac{1}{2}-\frac{1}{4}=\frac{1}{4}$ for every such $n$, and the individual variances attain the same value in the limit:

$$y = \lim_{i\to\infty}\operatorname{Var}(X_i) = \lim_{i\to\infty}\left(\frac{1}{4}-\frac{1}{2^{2i+2}}\right) = \frac{1}{4}.$$

On the other hand, no random variable bounded between $0$ and $1$ can have a variance exceeding $1/4$. That shows this is the best possible bound.

In general, if the bounds on the random variables $X_i$ are $a$ and $b$, we need only rescale this example by $(b-a)$ and shift it by $a$ to produce an example in which the limiting variance is $(b-a)^2/4$, QED.
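To illustrate the rescaling numerically (a hedged sketch; the bounds $a=-2$, $b=3$ and the index $i=20$ are arbitrary choices): setting $Y_i = a + (b-a)X_i$ multiplies the variance by $(b-a)^2$, so the limiting variance becomes $(b-a)^2/4$.

```python
N = 60  # truncation of Omega = {0, 1, 2, ...}; the omitted tail has probability 2**-N

def P(omega):
    return 2.0 ** (-(omega + 1))

def X_i(i, omega):
    return 0 if omega == 0 or omega == i else 1

def var(f):
    """Variance of f over the truncated sample space."""
    m = sum(P(w) * f(w) for w in range(N))
    return sum(P(w) * f(w) ** 2 for w in range(N)) - m ** 2

a, b = -2.0, 3.0   # arbitrary bounds for illustration
i = 20             # large index: Var(X_i) is already very close to 1/4

def Y_i(w):
    """Rescale X_i from {0, 1} to the interval [a, b]."""
    return a + (b - a) * X_i(i, w)

print(var(Y_i))    # close to (b - a)**2 / 4 = 6.25
```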

whuber