
This theorem is from Econometric Analysis (7th edition) by Greene (2012), page 1071. It states: "If $x_i$, $i=1,2,...,n$, is a sample of observations such that $E(x_i)=\mu_i<\infty$ and $var(x_i)=\sigma_i^2<\infty$, and $\frac{1}{n^2}\sum_{i}\sigma_i^2\rightarrow0$ as $n\rightarrow\infty$, then $\text{plim}(\overline{x}_n-\overline{\mu}_n)=0$," where $\overline{x}_n$ and $\overline{\mu}_n$ are the averages of the $x_i$ and the $\mu_i$, respectively.

I guess $\text{plim}(\overline{x}_n-\overline{\mu}_n)=0$ can be written as "$\text{plim}(\overline{x}_n)=\text{plim}(\overline{\mu}_n)$" if $\text{plim}(\overline{\mu}_n)$ exists (i.e., is a constant). Am I correct? If not, under what condition can it be written like that? I ask because the book mentions that "The Chebychev theorem does not state that $\overline{x}_n$ converges to $\overline{\mu}_n$, or even that it converges to a constant at all. That would require a precise statement about the behavior of $\overline{\mu}_n$".

Edit 1: I understand this theorem. My question is more about under what conditions $\overline{x}_n$ converges to $\overline{\mu}_n$ in probability, since the book says "That would require a precise statement about the behavior of $\overline{\mu}_n$" and I am not sure what that statement could be.

As you pointed out, $\mu_i$ is not a random variable, and neither is $\overline{\mu}_n$. But in the practical setting where I apply this, the $\mu_i$ are i.i.d. draws from a distribution $F$ with finite mean $m$ and variance $v$: each time I draw a $\mu_i$ from $F$ and then generate $x_i$ from $N(\mu_i,\sigma_i^2)$. In this setting $\mu_i$ is a random variable, and so is $\overline{\mu}_n$, and now $\text{plim}\,\overline{\mu}_n=m$. Can we say $\text{plim}(\overline{x}_n)=\text{plim}(\overline{\mu}_n)=m$ in this specific setting? We can assume independence here. Thanks.
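To make the setting concrete, here is a small simulation sketch; the exponential choice for $F$ (with mean $m=2$) and the uniform bounds on the $\sigma_i^2$ are my own illustrative assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the setting above: mu_i i.i.d. from F, then x_i ~ N(mu_i, sigma_i^2).
m, n = 2.0, 100_000
mu = rng.exponential(scale=m, size=n)           # mu_i i.i.d. from F, E[mu_i] = m
sigma2 = rng.uniform(0.5, 2.0, size=n)          # bounded variances sigma_i^2
x = rng.normal(loc=mu, scale=np.sqrt(sigma2))   # x_i ~ N(mu_i, sigma_i^2)

print(mu.mean())   # close to m = 2 (law of large numbers for the mu_i)
print(x.mean())    # also close to m (Chebyshev WLLN for xbar_n - mubar_n)
```

With independence and bounded variances, both averages concentrate near $m$, consistent with $\text{plim}(\overline{x}_n)=\text{plim}(\overline{\mu}_n)=m$.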

Ruth
  • Your notation is a little unusual. Do you mean to say that $\bar{X}_n - \bar{\mu}_n$ converges to zero in probability? Also, are the $X_i$ assumed independent? – dsaxton Dec 17 '15 at 16:24
    "$\text{plim}(\bar\mu_n)$" doesn't make sense because the $\mu_i$ are not random variables. A concrete example to contemplate is $X_{2^i+j} + (-1)^i\sim B(1/2)$ for all $i=0, 1, 2, \ldots$ and $j=0, \ldots, 2^i-1$. The sequence $\mu_n$ alternates between longer and longer strings of $3/2$ and $-1/2$, never reaching a limit (but not diverging either), while $\sigma_n^2=1/4$ for all $n$. – whuber Dec 17 '15 at 21:23

2 Answers


Presumably we're meant to assume independence; otherwise there is a simple counterexample: let $X_1$ be Bernoulli$(1/2)$ and set $X_i = X_1$ for $i > 1$.
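A minimal simulation sketch of this counterexample (the values of $n$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# X_i = X_1 for every i, so xbar_n - mubar_n = X_1 - 1/2 no matter how large n is.
for n in (10, 10_000, 10_000_000):
    x1 = rng.integers(0, 2)   # a fresh realization of X_1 ~ Bernoulli(1/2)
    print(n, x1 - 0.5)        # always +1/2 or -1/2, never shrinking with n
```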

I'm not totally sure what you're asking, but assuming independence, the fact that $\bar{X}_n - \bar{\mu}_n$ converges to zero in probability is immediate from Chebyshev's inequality. Just note that for any $\epsilon > 0$

$$ P \left ( \left| \frac{\sum_{i=1}^{n} (X_i - \mu_i)}{n} \right| \geq \epsilon \right ) \leq \frac{\sum_{i=1}^{n} \sigma^2_i}{n^2 \epsilon^2} $$

and the condition tells us that this bound goes to zero as $n \to \infty$. Which part of this is causing confusion?
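For what it's worth, here is a quick simulation check of this bound (the particular $\mu_i$ and $\sigma_i^2$ below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Compare the empirical tail probability with the Chebyshev bound.
n, reps, eps = 1_000, 5_000, 0.1
mu = np.linspace(0.0, 5.0, n)             # arbitrary means mu_i
sigma2 = rng.uniform(0.5, 2.0, size=n)    # arbitrary bounded variances sigma_i^2

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))   # reps independent samples
dev = np.abs(x.mean(axis=1) - mu.mean())              # |xbar_n - mubar_n|

print((dev >= eps).mean())                 # empirical P(|xbar - mubar| >= eps)
print(sigma2.sum() / (n ** 2 * eps ** 2))  # Chebyshev bound (holds, conservatively)
```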

dsaxton
  • +1 I think that requiring the $X_i$ to be uncorrelated will suffice. – Dilip Sarwate Dec 17 '15 at 17:45
  • "... could be relaxed quite a bit." I am not so sure. $n^{-2} \sum_{i=1}^{n} \sigma^2_i$ is exactly the variance of $\frac{\sum_{i=1}^{n} (X_i - \mu_i)}{n}$ and it needs to converge to $0$ as $n \to \infty$. – Dilip Sarwate Dec 17 '15 at 22:19

Consider independent random variables $X_n$, $n \geq 1$, where $X_n \sim N(n, 1)$. In your book's terminology, $\mu_n = n$ and $\sigma_n^2 = \sigma^2 = 1$. Define $S_n = \frac 1n \sum_{i=1}^n X_i$ and note that $S_n \sim N\left(\frac{n+1}{2},\frac{\sigma^2}{n}\right)$ is what your book calls $\bar{X}_n$, whose mean is $\bar{\mu}_n = \frac{1}{n}\sum_{i=1}^n \mu_i = \frac{n+1}{2}$. Hence, $$Z_n = \left(S_n - \frac{n+1}{2}\right)\sim N\left(0,\frac{\sigma^2}{n}\right).$$ Now, Chebyshev's inequality says that \begin{align} P\{|Z_n| > \epsilon\} &= P\left\{\left|S_n - \frac{n+1}{2}\right| > \epsilon\right\}\\ &= P\left\{\left|\bar{X}_n - \bar{\mu}_n\right| > \epsilon\right\}\\ &\leq \frac{\sigma^2}{n\epsilon^2} \to 0 ~~ \text{as}~~ n \to \infty. \end{align}

Thus, $Z_n$ converges to $0$ in probability, but, as you correctly deduced, it cannot be said that $S_n = \bar{X}_n$ converges in probability to $\lim_{n\to \infty} \bar{\mu}_n$, because that limit does not exist: the sequence of numbers $\bar{\mu}_n$ diverges. In his comment, whuber gives an example of a sequence $\bar{\mu}_n$ for which the limit does not exist and the sequence neither converges nor diverges. We can construct a similar example by modifying the conditions above.
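Before modifying the example, here is a quick simulation sketch of this first construction (the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# X_i ~ N(i, 1): xbar_n - mubar_n shrinks, but xbar_n itself diverges.
for n in (10, 1_000, 100_000):
    x = rng.normal(np.arange(1, n + 1), 1.0)   # X_1, ..., X_n
    xbar, mubar = x.mean(), (n + 1) / 2
    print(n, xbar - mubar, xbar)   # difference -> 0 while xbar grows without bound
```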

Suppose instead that $X_n \sim N((-1)^nn,1)$, so that now $$S_n \sim \begin{cases}N\left(\frac{1}{2},\frac{\sigma^2}{n}\right), & n ~~\text{even},\\N\left(-\frac{1}{2}-\frac{1}{2n},\frac{\sigma^2}{n}\right), & n ~~\text{odd}. \end{cases}$$ The terms of the sequence $\bar{\mu}_n$ are thus alternately positive (fixed at $\frac 12$) and negative (approaching $-\frac 12$ as $n \to \infty$). The sequence neither diverges nor approaches a limit. Correspondingly, $\bar{X}_n$ does not converge in probability either: its even-indexed and odd-indexed subsequences converge in probability to $+\frac 12$ and $-\frac 12$, respectively.
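A matching sketch for this modified construction (again, the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# X_i ~ N((-1)^i * i, 1): xbar_n oscillates rather than converging.
for n in (10_001, 10_002, 100_001, 100_002):
    i = np.arange(1, n + 1)
    x = rng.normal((-1) ** i * i, 1.0)
    print(n, x.mean())   # near -1/2 for odd n, near +1/2 for even n
```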

Dilip Sarwate