
I saw in a statistics book that "it can be proved that if two normally distributed variables have covariance = 0, they are independent". How can I start this proof?

Can I say that $\operatorname{cov}(X,Y) = E(XY) - E(X)E(Y)$, like here? Why?
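(For reference, that identity holds for any two random variables with finite second moments; it follows from expanding the definition of covariance and using linearity of expectation:

$$\operatorname{cov}(X,Y) = E\big[(X - E[X])(Y - E[Y])\big] = E[XY] - E[X]\,E[Y].)$$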

Oalvinegro
  • The definition of independence of $X$ and $Y$ is that $h(x,y) = f(x)\,g(y)$, where $h$ is the joint pdf and $f$ and $g$ are the marginal pdfs. Look at the pdf of the bivariate normal distribution. Notice where the correlation comes up. If the covariance is zero, consider what that says about the correlation. – Dave May 20 '19 at 13:08

2 Answers

3

First, they need to be jointly normal. The bivariate joint density is shown here. If you substitute $\rho=0$ (which means the correlation, and hence the covariance, is $0$), the joint density reduces to $f_{X,Y}(x,y)=f_X(x)f_Y(y)$, which is exactly the factorization required for independence. The same argument works in more than two dimensions, because $N$ jointly normal RVs have a density defined entirely in terms of their mean vector and covariance matrix.
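For completeness, since the linked formula isn't reproduced here: with means $\mu_X,\mu_Y$, standard deviations $\sigma_X,\sigma_Y$, and correlation $\rho$, the bivariate normal density is

$$f_{X,Y}(x,y)=\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\!\left(-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_X)^2}{\sigma_X^2}-\frac{2\rho(x-\mu_X)(y-\mu_Y)}{\sigma_X\sigma_Y}+\frac{(y-\mu_Y)^2}{\sigma_Y^2}\right]\right).$$

Setting $\rho=0$ removes the cross term and splits the exponent, so the density factors into the product of the two normal marginals:

$$f_{X,Y}(x,y)=\frac{1}{\sqrt{2\pi}\,\sigma_X}e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}}\cdot\frac{1}{\sqrt{2\pi}\,\sigma_Y}e^{-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}}=f_X(x)\,f_Y(y).$$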

gunes
  • The point about joint normality is crucial. Normal marginals with zero correlation (covariance) do not imply independence unless the joint distribution is multivariate normal! Think about combining two bivariate normal distributions with correlations of opposite signs, say $0.9$ and $-0.9$. The scatterplot forms an X, so the variables are not independent, but the correlation between the marginals is zero (see the simulation sketch after this comment). – Dave May 20 '19 at 13:27
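A minimal NumPy sketch of this counterexample (the 50/50 mixture weights and the seed are my own choices, not from the comment): both marginals are standard normal and the sample correlation is near zero, yet $|X|$ and $|Y|$ are clearly correlated, so $X$ and $Y$ are not independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Equal mixture of two bivariate normals with correlations +0.9 and -0.9;
# each component has standard normal marginals, so the mixture does too.
cov_pos = [[1.0, 0.9], [0.9, 1.0]]
cov_neg = [[1.0, -0.9], [-0.9, 1.0]]

pick = rng.random(n) < 0.5
a = rng.multivariate_normal([0.0, 0.0], cov_pos, size=n)
b = rng.multivariate_normal([0.0, 0.0], cov_neg, size=n)
xy = np.where(pick[:, None], a, b)
x, y = xy[:, 0], xy[:, 1]

# Correlation (and covariance) of the mixture is ~0 ...
print(np.corrcoef(x, y)[0, 1])
# ... but |X| and |Y| are strongly correlated, so X and Y are dependent.
print(np.corrcoef(np.abs(x), np.abs(y))[0, 1])
```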
-2

$\operatorname{cov}(X,Y) = E(XY) - E(X)E(Y) = 0$ implies that:

$E(XY) = E(X)E(Y)$, which is the definition of independence between $X$ and $Y$.

Kane Chua
  • None of this uses the normality of the variables. How come? – Aksakal May 20 '19 at 12:59
  • Your definition of independence differs from the usual one. See https://en.wikipedia.org/wiki/Independence_(probability_theory)#Two_random_variables. – whuber May 20 '19 at 14:18
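Indeed, $E(XY)=E(X)E(Y)$ is uncorrelatedness, not independence. A minimal sketch of the standard counterexample (my own illustration, not from the thread): take $X \sim N(0,1)$ and $Y = X^2$, so $\operatorname{cov}(X,Y)=E(X^3)-E(X)E(X^2)=0$, yet $Y$ is a deterministic function of $X$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = x ** 2  # deterministic function of x: certainly not independent of x

# Sample covariance is ~0 even though y is completely determined by x.
print(np.cov(x, y)[0, 1])

# Dependence is immediate: Y > 1 exactly when |X| > 1, so this prints 1.0.
print(np.mean(y[np.abs(x) > 1] > 1))
```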