Questions tagged [precision-matrix]

17 questions
5
votes
1 answer

Recover full covariance matrix from covariance diagonal and precision off-diagonals

Consider an $N$-by-$N$ covariance matrix: \begin{equation} \Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \dots & \Sigma_{1N}\\ \Sigma_{12} & \Sigma_{22} & \dots & \Sigma_{2N}\\ \vdots & \vdots & \ddots & \vdots\\ \Sigma_{1N} & \Sigma_{2N} & \dots &…
Jack G.
  • 61
  • 3
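A minimal numpy sketch of the quantities the question treats as given (the diagonal of $\Sigma$ and the off-diagonal entries of $\Sigma^{-1}$); the matrix here is a random example, and the sketch only sets up those inputs rather than solving the recovery problem.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.normal(size=(N, N))
Sigma = A @ A.T + N * np.eye(N)      # a random positive-definite covariance matrix
Q = np.linalg.inv(Sigma)             # the corresponding precision matrix

# The two pieces of information assumed known in the question:
cov_diagonal = np.diag(Sigma)                # Sigma_11, ..., Sigma_NN
prec_offdiag = Q - np.diag(np.diag(Q))       # off-diagonal entries of Sigma^{-1}

print(cov_diagonal)
print(prec_offdiag)
```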
5
votes
1 answer

Can near zeros in precision matrix be treated as zeros?

A zero entry in the precision matrix (the inverse of the covariance matrix) means the corresponding variables are independent given all the other variables. For real-world data samples, when is an entry in the precision matrix small enough to be…
Ivana
  • 552
  • 2
  • 12
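One common way to judge whether a precision entry is "small" is to rescale it to a partial correlation, which lives on $[-1, 1]$. A minimal numpy sketch (the example matrix and the 0.05 cutoff are arbitrary illustrations, not recommendations):

```python
import numpy as np

def partial_correlations(Q):
    """Convert a precision matrix Q into partial correlations:
    rho_ij = -Q_ij / sqrt(Q_ii * Q_jj) for i != j."""
    d = 1.0 / np.sqrt(np.diag(Q))
    P = -Q * np.outer(d, d)
    np.fill_diagonal(P, 1.0)
    return P

# Example precision matrix (illustrative values only)
Q = np.array([[ 2.00, -0.80,  0.01],
              [-0.80,  1.50, -0.30],
              [ 0.01, -0.30,  1.00]])

P = partial_correlations(Q)
print(P)
print(np.abs(P) < 0.05)   # entries one might be tempted to treat as "zero"
```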
3
votes
1 answer

Estimate precision matrix with given spatial sparsity pattern

I have a set of $n$ measurements of $p$ variables $\xi_i$. I am interested in the inverse covariance or precision matrix $P$ of the variables, but because $p \gg n$ and because of limited storage ($p$ can be on the order of several hundred thousand), I would…
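One scalable way to exploit a known sparsity pattern is node-wise regression: regress each variable on its known neighbours only, and convert the coefficients and residual variance into a row of the precision matrix (using $\beta_j = -Q_{ij}/Q_{ii}$ and $\mathrm{Var}(x_i \mid x_{-i}) = 1/Q_{ii}$). The sketch below is a generic illustration on small dense numpy arrays, not a solution to the storage problem in the question; for very large $p$ the same per-node regressions would use sparse data structures.

```python
import numpy as np

def nodewise_precision(X, neighbors):
    """Estimate a sparse precision matrix from data X (n x p) given, for each
    variable i, the list neighbors[i] of variables allowed to have Q[i, j] != 0."""
    n, p = X.shape
    X = X - X.mean(axis=0)
    Q = np.zeros((p, p))
    for i in range(p):
        nb = list(neighbors[i])
        beta, *_ = np.linalg.lstsq(X[:, nb], X[:, i], rcond=None)
        resid = X[:, i] - X[:, nb] @ beta
        sigma2 = resid @ resid / n          # conditional variance of x_i given its neighbors
        Q[i, i] = 1.0 / sigma2
        Q[i, nb] = -beta / sigma2           # Q_ij = -beta_j / Var(x_i | rest)
    return 0.5 * (Q + Q.T)                  # symmetrize the two estimates of each Q_ij

# Tiny example with a chain-shaped sparsity pattern: 0 - 1 - 2 - 3
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(nodewise_precision(X, neighbors).round(2))
```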
2
votes
0 answers

Decomposition of a Gaussian Markov random field in independent subfields

A zero-mean GMRF (i.e., a multivariate normal distribution whose precision matrix is sparse) with precision $Q \in \mathbb{R}^{n \times n}$ and covariance $\Sigma = Q^{-1}$ is eigendecomposed as $Q = V \Lambda V^\top$ and $\Sigma = V \Lambda^{-1}…
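A small numpy sketch of the decomposition the question describes: if $x \sim N(0, \Sigma)$ with $Q = V\Lambda V^\top$, then $y = V^\top x$ has diagonal covariance $\Lambda^{-1}$, so the rotated components are independent Gaussians (the precision matrix here is an arbitrary sparse tridiagonal example).

```python
import numpy as np

# An arbitrary sparse (tridiagonal) precision matrix for illustration
Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
Sigma = np.linalg.inv(Q)

lam, V = np.linalg.eigh(Q)          # Q = V diag(lam) V^T
# Covariance of the rotated field y = V^T x is diagonal: Lambda^{-1}
cov_y = V.T @ Sigma @ V
print(np.round(cov_y, 10))          # numerically diagonal
print(np.round(1.0 / lam, 10))      # matches the diagonal entries
```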
2
votes
0 answers

Asymptotics of 2 x 2 precision matrix

Edited to give the answer... but I still don't understand where it came from! Suppose we have $$X_1, X_2,..., X_n \overset{i.i.d.}{\sim} N(0, \Omega^{-1})$$ where $\Omega \in \mathbb{R}^{2 \times 2}$ is the precision matrix, the mean is known to be…
user272429
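A Monte Carlo sketch of the setting in the question (mean known to be zero, so the MLE of $\Omega$ is the inverse of $\frac{1}{n}\sum_i X_i X_i^\top$). It only simulates the sampling distribution of $\sqrt{n}(\hat\Omega - \Omega)$ so the asymptotic variances can be inspected empirically; the true $\Omega$ below is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(0)
Omega = np.array([[2.0, 0.5],
                  [0.5, 1.0]])      # arbitrary true 2x2 precision matrix
Sigma = np.linalg.inv(Omega)
n, reps = 2000, 5000

draws = np.empty((reps, 2, 2))
L = np.linalg.cholesky(Sigma)
for r in range(reps):
    X = rng.normal(size=(n, 2)) @ L.T      # X_i ~ N(0, Omega^{-1})
    S = X.T @ X / n                        # MLE of Sigma (mean known to be 0)
    draws[r] = np.sqrt(n) * (np.linalg.inv(S) - Omega)

# Empirical asymptotic variance of each entry of Omega_hat
print(draws.reshape(reps, 4).var(axis=0).round(3))
```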
2
votes
0 answers

What are good metrics for evaluating an inverse covariance matrix estimator?

For real datasets, where it's impossible to know the true inverse covariance, what are the methods of evaluating your inverse covariance estimator? Possible answers: If the number of features is large enough, randomly separate them into a train and…
Y. S.
  • 1,237
  • 3
  • 9
  • 14
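One common metric when the true inverse covariance is unknown is held-out Gaussian log-likelihood: estimate the precision on a training split and score the test split. A minimal sketch, using an arbitrary ridge-shrunk covariance purely as a placeholder for whatever estimator is being evaluated:

```python
import numpy as np

def gaussian_loglik(X, Theta, mu):
    """Average Gaussian log-likelihood of rows of X under N(mu, Theta^{-1})."""
    p = X.shape[1]
    _, logdet = np.linalg.slogdet(Theta)
    R = X - mu
    quad = np.einsum('ij,jk,ik->i', R, Theta, R)   # per-row Mahalanobis terms
    return 0.5 * (logdet - quad.mean() - p * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
train, test = X[:200], X[200:]

mu = train.mean(axis=0)
S = np.cov(train, rowvar=False)
Theta_hat = np.linalg.inv(S + 0.1 * np.eye(5))   # placeholder estimator (ridge-shrunk covariance)

print(gaussian_loglik(test, Theta_hat, mu))      # higher is better; compare across estimators
```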
2
votes
1 answer

Find an unbiased estimator of $\Sigma^{-1}$

Let $X_1,\dots, X_n$ be a random sample from $N_p(\mu, \Sigma)$, $\Sigma > 0$. Find an unbiased estimator of $\Sigma^{-1}$. I know the unbiased estimator of $\Sigma$ is $\dfrac{1}{n-1} \sum_{j=1}^n (X_j-\bar X)(X_j-\bar X)'$. But what about…
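A Monte Carlo sketch checking the standard Wishart-based candidate: since $S=\sum_j (X_j-\bar X)(X_j-\bar X)' \sim W_p(\Sigma, n-1)$ and $E[S^{-1}]=\Sigma^{-1}/(n-p-2)$ for $n>p+2$, the estimator $(n-p-2)\,S^{-1}$ should be unbiased. The simulation below only verifies this numerically (with an arbitrary true $\Sigma$) rather than proving it.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 3, 20, 20000
Sigma = np.array([[1.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.0]])      # arbitrary true covariance
L = np.linalg.cholesky(Sigma)

est = np.zeros((p, p))
for _ in range(reps):
    X = rng.normal(size=(n, p)) @ L.T
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc                        # sum-of-squares matrix ~ W_p(Sigma, n-1)
    est += (n - p - 2) * np.linalg.inv(S)

print((est / reps).round(3))             # average of the candidate estimator
print(np.linalg.inv(Sigma).round(3))     # true Sigma^{-1}, for comparison
```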
1
vote
0 answers

What is the analog of precision matrix for cross-covariance matrices?

For a covariance matrix, I am aware of the precision matrix, i.e. the inverse of the covariance matrix. What is the analog for a cross-covariance matrix, i.e. $E[XY^{\top}]-E[X]E[Y^{\top}]$, for two random vectors $X$ and $Y$ of different lengths? Would…
crosser
  • 11
  • 1
1
vote
1 answer

Graphical lasso with known precision matrix structure

I wonder if there is a way to estimate the precision matrix when certain elements are restricted to be zero? Suppose data are from $N(\mu,\Omega)$, where $\Omega=V^{-1}$, i.e. the precision matrix. Suppose we know that $\Omega_{i,j}=0$, $i\neq…
Tan
  • 1,349
  • 1
  • 13
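A minimal sketch of one way to impose a known zero pattern: maximize the Gaussian log-likelihood $\log\det\Omega-\mathrm{tr}(S\Omega)$ over only the entries allowed to be nonzero, here with a generic scipy optimizer rather than a dedicated graphical-model routine. The mask, data, and optimizer choice are all illustrative assumptions; specialized software may be preferable in practice.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_precision_mle(S, mask):
    """MLE of the precision matrix with Omega[i, j] forced to 0 where mask[i, j] is False.
    mask must be symmetric with a True diagonal."""
    p = S.shape[0]
    idx = [(i, j) for i in range(p) for j in range(i, p) if mask[i, j]]

    def unpack(theta):
        Omega = np.zeros((p, p))
        for t, (i, j) in zip(theta, idx):
            Omega[i, j] = Omega[j, i] = t
        return Omega

    def negloglik(theta):
        Omega = unpack(theta)
        sign, logdet = np.linalg.slogdet(Omega)
        if sign <= 0:
            return 1e10                  # outside the positive-definite cone
        return -(logdet - np.trace(S @ Omega))

    theta0 = [1.0 if i == j else 0.0 for (i, j) in idx]   # start at the identity
    res = minimize(negloglik, theta0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
    return unpack(res.x)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
S = np.cov(X, rowvar=False)
mask = np.array([[True,  True,  False],
                 [True,  True,  True],
                 [False, True,  True]])   # Omega_13 constrained to zero
print(constrained_precision_mle(S, mask).round(3))
```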
1
vote
0 answers

What if the zero mean assumption is relaxed in graphical LASSO?

I am working on a graphical LASSO (GLASSO) shrinkage of the variance-covariance matrix of 10 years of financial log-return data. The objective of the graphical LASSO is: $$\ell(0,\Sigma) = -\text{tr}(S_n\Theta) + \log\det\Theta -…
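If the zero-mean assumption is dropped, the usual practice is simply to plug the demeaned sample covariance into the same objective (equivalently, profiling out $\mu$ at $\hat\mu=\bar x$). A small sketch with scikit-learn's GraphicalLasso as a stand-in for whatever GLASSO implementation is in use; the returns are simulated and the alpha value is an arbitrary choice.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Simulated returns with a clearly nonzero mean (units are arbitrary)
returns = 0.05 + rng.normal(size=(2500, 10))

# Demean explicitly, then fit; with assume_centered=True the estimator uses X'X/n as S_n,
# so centering first is exactly what replaces the zero-mean assumption.
centered = returns - returns.mean(axis=0)
gl = GraphicalLasso(alpha=0.01, assume_centered=True).fit(centered)

print(gl.precision_.round(2))
```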
1
vote
0 answers

Problems with Graphical Lasso

I'm trying to use the Graphical Lasso algorithm (more specifically the R package glasso) to find an estimated graph representing the connections between a set of nodes by estimating a precision matrix. I have a feature matrix containing the values…
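Once a precision matrix has been estimated (by glasso or anything else), turning it into a graph amounts to reading off the nonzero off-diagonal pattern. A minimal numpy sketch; the precision matrix is a made-up example and the 1e-8 tolerance is an arbitrary numerical cutoff.

```python
import numpy as np

# A made-up estimated precision matrix, e.g. the output of a graphical lasso fit
Theta_hat = np.array([[ 1.2, -0.4,  0.0,  0.0],
                      [-0.4,  0.9,  0.0, -0.2],
                      [ 0.0,  0.0,  1.1,  0.0],
                      [ 0.0, -0.2,  0.0,  0.8]])

adjacency = (np.abs(Theta_hat) > 1e-8)
np.fill_diagonal(adjacency, False)   # no self-loops
p = len(adjacency)
edges = [(i, j) for i in range(p) for j in range(i + 1, p) if adjacency[i, j]]
print(edges)                          # [(0, 1), (1, 3)]
```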
1
vote
0 answers

Scaling precision matrix

I have little knowledge about algebra, and need to rescale a precision (not variance-covariance) matrix. Suppose I have two variables, $X_1$ and $X_2$, and their precision matrix is $\begin{bmatrix} 5 & 0.7 \\ 0.7 & 0.2 \end{bmatrix}$. Because the "variance" of the first variable ($X_1$) should be…
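Rescaling a variable by a factor $d_i$ turns the covariance into $D\Sigma D$ and therefore the precision into $D^{-1}QD^{-1}$ with $D=\mathrm{diag}(d)$. A small sketch using the 2-by-2 precision matrix from the question and an arbitrary target variance for $X_1$ (the value 1.0 is just an example, since the intended target is cut off in the excerpt):

```python
import numpy as np

Q = np.array([[5.0, 0.7],
              [0.7, 0.2]])            # precision matrix from the question
Sigma = np.linalg.inv(Q)
print(np.diag(Sigma))                 # implied variances of X1, X2

# Rescale X1 so that its variance becomes, say, 1.0 (arbitrary target for illustration)
target_var_x1 = 1.0
d = np.array([np.sqrt(target_var_x1 / Sigma[0, 0]), 1.0])   # scale factors for (X1, X2)
D = np.diag(d)

Q_rescaled = np.linalg.inv(D) @ Q @ np.linalg.inv(D)        # precision of the rescaled variables
print(np.diag(np.linalg.inv(Q_rescaled)))                   # X1's variance is now 1.0
```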
0
votes
0 answers

When should we decompose the precision matrix as opposed to the covariance matrix to generate correlated variables?

We can take a covariance matrix $\Sigma$ and factor it into a lower and an upper triangular matrix, $\Sigma = U^T U$, where $U$ is the upper-triangular Cholesky factor. This matrix can be used to transform uncorrelated standard normal variables $X$ and $Y$ to $W$…
Chris C
  • 2,545
  • 16
  • 34
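A short sketch of the two routes the question contrasts, written with the lower-triangular factor $L$ (i.e. $U^\top$ in the question's notation): with $\Sigma=LL^\top$ one maps standard normals by $w = Lz$, while with a Cholesky factor of the precision, $Q=LL^\top$, one solves $L^\top w = z$ instead, so the precision route costs a triangular solve rather than a multiply. The covariance below is an arbitrary 2-by-2 example.

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])            # arbitrary target covariance
Q = np.linalg.inv(Sigma)
Z = rng.normal(size=(200000, 2))          # uncorrelated standard normals

# Route 1: Cholesky of the covariance, w = L z
L_cov = np.linalg.cholesky(Sigma)
W1 = Z @ L_cov.T

# Route 2: Cholesky of the precision, solve L^T w = z
L_prec = np.linalg.cholesky(Q)
W2 = np.linalg.solve(L_prec.T, Z.T).T

print(np.cov(W1, rowvar=False).round(2))  # both match Sigma up to Monte Carlo error
print(np.cov(W2, rowvar=False).round(2))
```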
0
votes
1 answer

R package to solve Gaussian MLE under conditional independence constraints

Is there any R package or function to solve Gaussian MLE under conditional independence constraints? Suppose we have $y_i\overset{i.i.d}{\sim}\mathcal{N}(0,\Sigma_{p\times p})$, $i = 1,2,\ldots,n$. We know that $(\Sigma^{-1})_{ij} = 0$, for some…
0
votes
0 answers

Contribution to Mahalanobis Distance

I am trying to figure out how to decompose Mahalanobis Distance into its marginal contributors on a granular level. I've already split it up into marginal contribution from each variable: $$ MD_i = r_i\frac{\partial MD}{\partial r_i} =…
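Since $MD=\sqrt{r^\top\Sigma^{-1}r}$ is homogeneous of degree one in $r$, the per-variable terms $r_i\,\partial MD/\partial r_i = r_i(\Sigma^{-1}r)_i/MD$ sum exactly to $MD$. A small numpy check of that decomposition with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 4
A = rng.normal(size=(p, p))
Sigma = A @ A.T + p * np.eye(p)           # made-up covariance matrix
Q = np.linalg.inv(Sigma)
r = rng.normal(size=p)                    # deviation vector (x - mu)

MD = np.sqrt(r @ Q @ r)
contributions = r * (Q @ r) / MD          # r_i * dMD/dr_i, one term per variable

print(MD)
print(contributions)
print(contributions.sum())                # equals MD
```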