
I am looking for an intuitive explanation of the Marchenko-Pastur law, which is often described as a law-of-large-numbers analog for random matrices. I know the law gives the limiting probability density of the eigenvalues of a large Wishart matrix as its dimensions tend to infinity, but I am curious about the corollaries for covariance matrices of large $n \times p$ data matrices $X$ with correlation between columns.
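For reference (a standard statement of the law, not part of my question), for a matrix $X$ with i.i.d. entries of variance $\sigma^2$ and $\gamma = p/n \le 1$, the eigenvalues of the sample covariance matrix $\frac{1}{n}X^T X$ have limiting density

$$
f(\lambda) \;=\; \frac{1}{2\pi \sigma^2 \gamma \lambda}\,
\sqrt{(\lambda^{+}-\lambda)(\lambda-\lambda^{-})}\,,
\qquad \lambda \in [\lambda^{-}, \lambda^{+}],
$$

with support edges $\lambda^{\pm} = \sigma^2 (1 \pm \sqrt{\gamma})^2$.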

Let $\gamma=p/n$. How does the MP law provide the lower bound $\lambda^-$ and upper bound $\lambda^+$ of the eigenvalues of a covariance matrix $\mathbf{C}$ derived from an $n \times p$ Wishart matrix, without knowing the variance of its random variates? Secondly, given $n$ and $p$, why are the eigenvalues of $E(X^TX)$ that exceed $\lambda^+$ considered signal eigenvalues, while anything below is considered noise? And what do eigenvalues to the left of the interval $(\lambda^-, \lambda^+)$ indicate?
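For concreteness, here is a small simulation I put together (my own sketch, not from any answer; it assumes i.i.d. standard Gaussian entries, i.e. a known variance $\sigma^2 = 1$) comparing the empirical eigenvalue range of a sample covariance matrix against the MP support edges $\lambda^{\pm} = \sigma^2(1 \pm \sqrt{\gamma})^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 4000, 1000          # samples (rows) and variables (columns)
gamma = p / n              # aspect ratio gamma = p/n
sigma2 = 1.0               # entry variance (assumed known here)

# Sample covariance matrix of an i.i.d. Gaussian data matrix
X = rng.standard_normal((n, p))
C = X.T @ X / n            # p x p sample covariance
eigs = np.linalg.eigvalsh(C)

# Marchenko-Pastur support edges
lam_minus = sigma2 * (1 - np.sqrt(gamma)) ** 2
lam_plus = sigma2 * (1 + np.sqrt(gamma)) ** 2

print(f"empirical range: [{eigs.min():.3f}, {eigs.max():.3f}]")
print(f"MP support:      [{lam_minus:.3f}, {lam_plus:.3f}]")
```

With pure noise (no correlation between columns), all empirical eigenvalues fall close to $[\lambda^-, \lambda^+]$; this is the baseline against which "signal" eigenvalues above $\lambda^+$ are judged.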

kjetil b halvorsen
michek
    If you don't know the variance of the random variates, you can't use the Marchenko-Pastur distribution. You definitely need to know the random noise variance. See here for the distribution: https://en.wikipedia.org/wiki/Marchenko%E2%80%93Pastur_distribution – mortonjt Apr 16 '19 at 18:59

0 Answers