
I have the following problem: I'm calculating the sample covariance matrix in the frequency domain ($y_{k}$ is the FFT of the $k$-th time-domain symbol vector, basically a simulated received signal) as follows:

$$\mathbf{R}=\frac{1}{N_{f}}\sum_{k=0}^{N_{f}-1}y_{k}y_{k}^{H}$$
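For reference, this estimate is just a few lines of code. Here is a sketch in NumPy rather than MATLAB; the dimension $N$, the number of symbols $N_f$, and the random test signal are made-up example values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, Nf = 32, 100                      # vector dimension and number of symbols (example values)

# Simulated time-domain symbol vectors, one per row
y_time = rng.standard_normal((Nf, N)) + 1j * rng.standard_normal((Nf, N))
y_freq = np.fft.fft(y_time, axis=1)  # y_k = FFT of the k-th symbol vector

# R = (1/Nf) * sum_k y_k y_k^H  -- Hermitian PSD by construction
R = sum(np.outer(yk, yk.conj()) for yk in y_freq) / Nf
```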

Well, the next step in my algorithm is to solve an optimization problem. As an essential part of it, I need to compute (in MATLAB, via SVD, the power method, etc.) the eigenvalues of:

$$\mathbf{R}^{-1}\mathbf{A}$$

To avoid getting into too many details (I believe my issue is a numerical problem, so most of the actual algorithm/context isn't needed), let's assume $\mathbf A$ is simply a predefined matrix that I compute.

The REAL ISSUE is that $\mathbf R$ is ill-conditioned, as MATLAB warns me. So the inversion seems to be failing, and the eigenvalues I obtain are incredibly small (in fact I only get one nonzero eigenvalue). The dimensions of $\mathbf R$ are typically large; since the signals are compressed, the size depends on the compression ratio, but say $32\times 32$ for example.

One approach I found to mitigate this problem is diagonal loading:

$$\left(\mathbf R+\sigma\mathbf I\right)^{-1}\mathbf A\quad\text{with}\quad\sigma > 0$$

This seems to solve the problem: the eigenvalues are now rescaled by this background "noise". My question is how I can obtain the truly well-scaled eigenvalues, because later in my algorithm these eigenvalues will serve as weights, since I'm treating them as actual power estimates.
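To illustrate what the loaded computation looks like, here is a NumPy sketch (not my actual code: $A$ is stood in for by the identity matrix and the dimensions are invented). Note that solving a linear system is usually preferable to forming the explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Nf, sigma = 32, 8, 0.05           # deliberately Nf < N so that R is singular

Y = rng.standard_normal((N, Nf)) + 1j * rng.standard_normal((N, Nf))
R = Y @ Y.conj().T / Nf              # rank(R) <= Nf < N
A = np.eye(N)                        # placeholder for the predefined matrix A

R_loaded = R + sigma * np.eye(N)     # diagonal loading
# eigenvalues of (R + sigma*I)^{-1} A, via a solve instead of inv()
w = np.linalg.eigvals(np.linalg.solve(R_loaded, A))
```

With $A = I$ these eigenvalues are just $1/(\lambda_i+\sigma)$, so none of them blow up even though $R$ itself is singular.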

Note: I've been playing with the cond() function in MATLAB; for $\sigma = 0.05$, cond(R + sigma*I) = 2, which is not bad I believe.

Feel free to ask more questions about the problem, but I think my question relates to a purely numerical issue involving eigenvalues and covariance matrices.

Gilles

1 Answer


You can write

$$ R=\frac{1}{N_f}YY^H $$ where $Y$ is a matrix of size $N\times N_f$ and $N$ is the dimension of $y_k$. $Y$ contains all the measured $y_k$ as its columns. (The $1/N_f$ factor doesn't affect the rank argument below.)

Then, the rank of $R$ is upper-bounded by $N_f$. In particular, if $N_f<N$, $R$ will always be singular. So if you have too few measurements, you will likely run into the problem you described.
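This rank bound is easy to check numerically (a NumPy sketch with invented dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
N, Nf = 32, 8                        # fewer snapshots than dimensions

Y = rng.standard_normal((N, Nf)) + 1j * rng.standard_normal((N, Nf))
R = Y @ Y.conj().T / Nf              # sample covariance from Nf snapshots

# the rank is at most Nf, so at least N - Nf eigenvalues are zero
print(np.linalg.matrix_rank(R))
```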

For your second part, how the eigenvalues of $R$ and $R+\sigma I$ relate: consider the eigenvalue decomposition of $R$, $$ R=PVP^{-1},$$ where $P$ contains the eigenvectors and $V$ is the diagonal matrix of eigenvalues (some of which can be zero). Then you have

$$ R+\sigma I=P V P^{-1} + \sigma I = P(V+\sigma I)P^{-1} $$

This means that the eigenvalues are all increased by $\sigma$.
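This shift is easy to verify numerically (a NumPy sketch; the dimensions and $\sigma$ are arbitrary). It also means that, for $R$ itself, the unloaded eigenvalues can be recovered by subtracting $\sigma$ back off; note, though, that the eigenvalues of $(R+\sigma I)^{-1}A$ are not related to those of $R^{-1}A$ by such a simple shift in general.

```python
import numpy as np

rng = np.random.default_rng(3)
N, Nf, sigma = 6, 3, 0.05

Y = rng.standard_normal((N, Nf)) + 1j * rng.standard_normal((N, Nf))
R = Y @ Y.conj().T / Nf              # Hermitian PSD, rank <= Nf

ev = np.linalg.eigvalsh(R)                       # ascending order
ev_loaded = np.linalg.eigvalsh(R + sigma * np.eye(N))

assert np.allclose(ev_loaded, ev + sigma)        # every eigenvalue shifted by sigma
```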

Maximilian Matthé
  • In the first part, I made sure $N_f$ (the number of frames) is larger than the dimension of the vector. In the second part, the scaling of the eigenvalues as a function of $\sigma$ I was looking for is the relation between $R^{-1}A$ and $(R+\sigma I)^{-1}A$. Sorry if I didn't make that clear. Thank you very much for answering. – Ricardo García Nov 08 '16 at 22:08