
I know that if $X$ and $Y$ are independent then $\operatorname{Cov}(X,Y)=0$, and that $\operatorname{Cov}(X,Y)=E(XY)-\mu_X\mu_Y$.

But I don't know how to prove the $<$ part, or how the positivity of the random variable enters the proof.

Can anyone help me through this?

ghostDs

4 Answers


Via Jensen's inequality, you'll have $$\frac{1}{E[X]}\leq E\left[\frac{1}{X}\right]$$ because $f(x)=1/x$ is a convex function for positive $x$. If you substitute this into the covariance definition, you'll reach the desired result.
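If you want to see the inequality numerically before proving it, here is a minimal Monte Carlo sketch (the lognormal choice for $X$ and the use of NumPy are my own illustrative assumptions, not part of the answer):

```python
import numpy as np

# Minimal sanity check (not a proof): for a positive X -- an arbitrary
# lognormal here -- E[1/X] should be at least 1/E[X] (Jensen), and
# Cov(X, 1/X) should come out non-positive.
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

print("1/E[X]      :", 1.0 / x.mean())
print("E[1/X]      :", (1.0 / x).mean())          # >= 1/E[X] by Jensen
print("Cov(X, 1/X) :", np.cov(x, 1.0 / x)[0, 1])  # <= 0
```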

gunes

Using the formula for covariance that you gave, you can reexpress the covariance as follows: $$\begin{aligned} \text{Cov}\left(X, \frac{1}{X}\right) &= E \left[ X\frac{1}{X}\right]-E[X]E\left[\frac{1}{X}\right] \\ &= 1 - E[X]E\left[\frac{1}{X}\right] \end{aligned}$$

Let $\varphi(Y) = \frac{1}{Y}$, which is a convex function for positive values of $Y$ (because any line drawn between two points on the curve lies above the curve there). Jensen's inequality says that for convex functions, $$\varphi(E[Y]) \le E[\varphi(Y)]$$ or, equivalently (since $\varphi(E[Y])>0$), that $$\frac{1}{\varphi(E[Y])}E[\varphi(Y)] \ge 1$$

Writing $E[X]$ and $E\left[\frac{1}{X}\right]$ in terms of $\varphi(Y) = \frac{1}{Y}$, we can write

$$E[X] = \frac{1}{\varphi(E[X])} \\ E\left[\frac{1}{X}\right] = E[\varphi(X)]$$ so $$E[X]E\left[\frac{1}{X}\right] = \frac{1}{\varphi(E[X])}E[\varphi(X)] \ge 1 $$ $1$ minus something that is at least $1$ is at most $0$ (and strictly negative unless $X$ is almost surely constant, since $1/x$ is strictly convex).
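As a quick sanity check on this algebra (a sketch only; the Gamma distribution for $X$ is an arbitrary positive example of mine, not part of the argument), the sample covariance of $(X, 1/X)$ should agree with $1 - E[X]E\left[\frac{1}{X}\right]$ and be negative:

```python
import numpy as np

# Sketch: verify Cov(X, 1/X) = 1 - E[X] E[1/X] on simulated data.
# Gamma(shape=3) is an arbitrary positive distribution for X.
rng = np.random.default_rng(1)
x = rng.gamma(shape=3.0, scale=1.0, size=1_000_000)

lhs = np.cov(x, 1.0 / x)[0, 1]            # sample Cov(X, 1/X)
rhs = 1.0 - x.mean() * (1.0 / x).mean()   # 1 - E[X] E[1/X]
print(lhs, rhs)                           # nearly equal, both negative
```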

Noah

One proof is to note that, writing $X_1$ and $X_2$ for independent copies of $X$,

\begin{align} \mathbf{Cov} (X, X^{-1}) &= \frac{1}{2}\mathbf{E} \left[ \left( X_1 - X_2 \right) \cdot \left( X_1^{-1} - X_2^{-1} \right) \right] \\ &= -\frac{1}{2}\mathbf{E} \left[ \frac{\left( X_1 - X_2 \right)^2}{X_1\cdot X_2} \right] \leq 0. \end{align}
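A short simulation of this identity (a sketch; the Uniform$(1,3)$ distribution for $X$ is an arbitrary positive example, not part of the answer):

```python
import numpy as np

# Sketch: check the pairwise identity above on simulated data.
# X1, X2 are independent copies of X; Uniform(1, 3) is an arbitrary choice.
rng = np.random.default_rng(2)
n = 1_000_000
x1 = rng.uniform(1.0, 3.0, size=n)
x2 = rng.uniform(1.0, 3.0, size=n)

cov_direct = np.cov(x1, 1.0 / x1)[0, 1]                       # Cov(X, 1/X)
cov_pairs = 0.5 * np.mean((x1 - x2) * (1.0 / x1 - 1.0 / x2))  # first line
neg_form = -0.5 * np.mean((x1 - x2) ** 2 / (x1 * x2))         # second line
print(cov_direct, cov_pairs, neg_form)                        # all close, all <= 0
```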

πr8

This result is not due to the positivity of $X,$ nor to the convexity of the function $x\to 1/x,$ nor to any particular property of this function apart from the fact that it is decreasing. It would be less than satisfactory, then, to rely on standard convexity inequalities such as Jensen's Inequality.

Consider this characterization of the covariance of random variables $(X,Y):$ If we let $(X_1,Y_1)$ and $(X_2,Y_2)$ be independent versions of $(X,Y),$ then

$$\operatorname{Cov}(X, Y) = E\left[(X_2-X_1)(Y_2-Y_1)\right]/2.\tag{*}$$

See https://stats.stackexchange.com/a/18200/919 for an intuitive explanation. For completeness, I include a proof below.

Let $X$ be any random variable and let $f$ be any (measurable) function on the support of $X,$ so that $Y=f(X)$ also is a random variable. Suppose $f$ does not increase. (The function $x\to 1/x$ on the positive reals has this property.) That is,

$$x_1\gt x_2\text{ implies } f(x_1) \le f(x_2)$$

unless there is no probability that $X$ will be close to one of $x_1$ or $x_2.$ This immediately implies $(X_2-X_1)(f(X_2)-f(X_1))\le 0$ almost surely, whence by $(*)$ the covariance of $X$ and $f(X)$ cannot be positive, QED.
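A quick simulation illustrating both formula $(*)$ and the sign argument (a sketch; the normal distribution for $X$ and the decreasing function $f(x)=e^{-x}$ are arbitrary choices of mine, and note that $X$ need not even be positive here):

```python
import numpy as np

# Sketch: formula (*) and the sign of Cov(X, f(X)) for a non-increasing f.
# The normal X and f(x) = exp(-x) are arbitrary illustrative choices;
# neither positivity of X nor convexity of f is used.
rng = np.random.default_rng(3)
n = 1_000_000

def f(x):
    return np.exp(-x)               # any non-increasing (measurable) function

x1 = rng.normal(size=n)             # one copy of X
x2 = rng.normal(size=n)             # an independent copy of X

cov_direct = np.cov(x1, f(x1))[0, 1]                    # Cov(X, f(X))
cov_star = 0.5 * np.mean((x2 - x1) * (f(x2) - f(x1)))   # formula (*)
print(cov_direct, cov_star)                             # agree, both <= 0
```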


Proof of $(*)$

Suppose $\operatorname{Cov}(X,Y)$ exists and is finite. Each of the following steps is almost trivial, beginning with linearity of expectation, remembering that $(X,Y),$ $(X_1,Y_1),$ and $(X_2,Y_2)$ all have the same distributions, and exploiting the independence of the latter two:

$$\begin{aligned} &E\left[(X_2-X_1)(Y_2-Y_1)\right] &\\ &=E[X_2Y_2] - E[X_1Y_2] - E[X_2Y_1] + E[X_1Y_1]&\text{(linearity of E)}\\ &=E[X_2Y_2] - E[X_1]E[Y_2]\ +\ E[X_1Y_1] - E[X_2]E[Y_1]&\text{(independence)}\\ &=E[X_2Y_2] - E[X_2]E[Y_2]\ +\ E[X_1Y_1] - E[X_1]E[Y_1]&\text{(equal distributions)}\\ &=\operatorname{Cov}(X_2,Y_2)+\operatorname{Cov}(X_1,Y_1)&\text{(definition of Cov)}\\ &=2\operatorname{Cov}(X,Y)&\text{(equal distributions)}, \end{aligned}$$

QED.

whuber