This theorem is from Econometric Analysis (7th edition) by Greene (2012), page 1071. It states: "If $x_i$, $i=1,2,...,n$ is a sample of observations such that $E(x_i)=\mu_i<\infty$ and $var(x_i)=\sigma_i^2<\infty$, and $\frac{1}{n^2}\sum_{i}\sigma_i^2\rightarrow0$ as $n\rightarrow\infty$, then $plim(\overline{x}_n-\overline{\mu}_n)=0$, where $\overline{x}_n$ and $\overline{\mu}_n$ are the averages of the $x_i$ and $\mu_i$, respectively."
I guess $plim(\overline{x}_n-\overline{\mu}_n)=0$ can be rewritten as "$plim(\overline{x}_n)=plim(\overline{\mu}_n)$" provided $plim(\overline{\mu}_n)$ exists (i.e., is a constant). Am I correct? And if not, under what conditions can it be written that way? I ask because the book notes: "The Chebychev theorem does not state that $\overline{x}_n$ converges to $\overline{\mu}_n$, or even that it converges to a constant at all. That would require a precise statement about the behavior of $\overline{\mu}_n$."
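To spell out the step I have in mind (assuming the deterministic sequence $\overline{\mu}_n$ converges to some constant $c$):

$$\overline{x}_n=(\overline{x}_n-\overline{\mu}_n)+\overline{\mu}_n, \qquad plim(\overline{x}_n-\overline{\mu}_n)=0 \ \text{ and } \ \overline{\mu}_n\rightarrow c \ \implies \ plim\,\overline{x}_n = 0 + c = \lim_n \overline{\mu}_n.$$

So the missing ingredient seems to be exactly a statement like "$\overline{\mu}_n\rightarrow c$", which Chebychev's theorem itself does not supply.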
Edit 1: I understand the theorem itself. My question is rather: under what conditions does $\overline{x}_n$ converge to $\overline{\mu}_n$ in probability? The book says "That would require a precise statement about the behavior of $\overline{\mu}_n$," and I am not sure what that statement could be.
As you pointed out, $\mu_i$ is not a random variable, and neither is $\overline{\mu}_n$. But in the practical setting where I apply this, the $\mu_i$ are i.i.d. draws from a distribution $F$ with finite mean $m$ and variance $v$.
It is as if, each time, I draw a $\mu_i$ from $F$ and then generate $x_i$ from $N(\mu_i,\sigma_i^2)$. In this setting $\mu_i$ is a random variable, and so is $\overline{\mu}_n$, and now $plim\,\overline{\mu}_n=m$. Can we say $plim(\overline{x}_n)=plim(\overline{\mu}_n)=m$ in this specific setting? We can assume independence throughout. Thanks.
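Here is a quick simulation sketch of this hierarchical setting. The choices of $F$ and $\sigma_i^2$ are my own for illustration: $F$ is Exponential with mean $m=2$, and $\sigma_i^2 = 1 + 1/i$, which is bounded, so $\frac{1}{n^2}\sum_i\sigma_i^2\rightarrow 0$ and Chebychev's condition holds.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 2.0  # mean of F (hypothetical choice: F = Exponential with mean 2)
for n in [100, 10_000, 1_000_000]:
    i = np.arange(1, n + 1)
    mu = rng.exponential(scale=m, size=n)    # mu_i drawn i.i.d. from F
    sigma = np.sqrt(1.0 + 1.0 / i)           # sigma_i^2 = 1 + 1/i, bounded
    x = rng.normal(loc=mu, scale=sigma)      # x_i ~ N(mu_i, sigma_i^2)
    # Both gaps should shrink as n grows:
    # x_bar - mu_bar -> 0 (Chebychev), and x_bar -> m (via mu_bar -> m)
    print(n, x.mean() - mu.mean(), x.mean() - m)
```

In the runs I have in mind, both printed gaps shrink toward zero as $n$ grows, consistent with $plim(\overline{x}_n)=plim(\overline{\mu}_n)=m$ in this setting.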