Questions tagged [matrix-inverse]

The inverse of a given square matrix $A$ is the matrix $A^{-1}$ such that $AA^{-1} = A^{-1}A = I$, the identity matrix. It exists exactly when $A$ is nonsingular.

In theory, solving a system of linear equations $Ax = b$ amounts to computing $x = A^{-1}b$, and inverse matrices play an important role in theoretical analyses.

In practice, you almost never want to explicitly invert a matrix: it is computationally expensive and often numerically unstable. Every common use of a matrix inverse has a cheaper, more stable alternative, such as Gaussian elimination (LU factorization) for solving linear systems.
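A minimal NumPy sketch of this advice: solve the system directly rather than forming the inverse. The matrix and right-hand side below are illustrative (a random well-conditioned $A$).

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)  # well-conditioned test matrix
b = rng.standard_normal(5)

x_solve = np.linalg.solve(A, b)  # LU-based solve, no explicit inverse
x_inv = np.linalg.inv(A) @ b     # explicit inverse: slower, less stable
```

On a well-conditioned problem the two agree; on ill-conditioned ones the `solve` route is the safer default.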

150 questions
45 votes · 4 answers

Why does inversion of a covariance matrix yield partial correlations between random variables?

I have heard that partial correlations between random variables can be found by inverting the covariance matrix and taking the appropriate cells of the resulting precision matrix (this fact is mentioned in http://en.wikipedia.org/wiki/Partial_correlation,…
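The fact the question refers to can be checked numerically: with precision matrix $P = \Sigma^{-1}$, the partial correlation of variables $i$ and $j$ given the rest is $-P_{ij}/\sqrt{P_{ii}P_{jj}}$. A sketch with hypothetical random data, cross-checked against the three-variable recursion formula:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3)) @ rng.standard_normal((3, 3))  # correlated data

P = np.linalg.inv(np.cov(X, rowvar=False))          # precision matrix
partial_01 = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])  # partial corr of 0, 1 given 2

# cross-check with the textbook three-variable formula
R = np.corrcoef(X, rowvar=False)
check = (R[0, 1] - R[0, 2] * R[1, 2]) / np.sqrt((1 - R[0, 2]**2) * (1 - R[1, 2]**2))
```

The two quantities agree exactly (up to rounding); the identity is algebraic, not asymptotic.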
25 votes · 2 answers

Efficient calculation of matrix inverse in R

I need to calculate a matrix inverse and have been using the solve function. While it works well on small matrices, solve tends to be very slow on large ones. I was wondering if there is any other function or combination of functions (through SVD, QR,…
jitendra · 468
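The question asks about R's `solve`; as a language-neutral sketch (shown in NumPy rather than R), one of the decompositions it mentions, the SVD, yields the inverse directly via $A^{-1} = V S^{-1} U^T$ once $A = U S V^T$ is computed:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)  # illustrative invertible matrix

U, s, Vt = np.linalg.svd(A)
A_inv_svd = Vt.T @ np.diag(1.0 / s) @ U.T  # A^{-1} = V S^{-1} U^T
```

Note that for most downstream uses the full inverse is unnecessary; a factor-and-solve is usually faster.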
17 votes · 1 answer

Relationship between Cholesky decomposition and matrix inversion?

I've been reviewing Gaussian processes and, from what I can tell, there is some debate about whether the "covariance matrix" (returned by the kernel), which needs to be inverted, should be handled by explicit matrix inversion (expensive and numerically…
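The standard resolution of this debate is to Cholesky-factor the covariance once and back-substitute, never forming $K^{-1}$. A sketch with a hypothetical squared-exponential kernel and diagonal jitter (both illustrative choices, not from the question):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 1))
# hypothetical squared-exponential kernel, jitter added for numerical stability
K = np.exp(-0.5 * (X - X.T)**2) + 1e-8 * np.eye(20)
y = rng.standard_normal(20)

alpha = cho_solve(cho_factor(K), y)  # K^{-1} y without ever forming K^{-1}
```

The Cholesky factor can be reused for the log-determinant and for predictive variances, which is why GP implementations favor it.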
15 votes · 3 answers

What is an example of perfect multicollinearity?

What is an example of perfect collinearity in terms of the design matrix $X$? I would like an example where $\hat \beta = (X'X)^{-1}X'Y$ can't be estimated because $(X'X)$ is not invertible.
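One concrete answer, sketched in NumPy: include a column that is an exact multiple of another. Then $X'X$ is rank-deficient and the OLS formula breaks down.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(10)
# third column is exactly 2x the second: perfect collinearity
X = np.column_stack([np.ones(10), x, 2 * x])

XtX = X.T @ X
rank = np.linalg.matrix_rank(XtX)  # 2 < 3, so (X'X)^{-1} does not exist
```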
13 votes · 3 answers

Linear regression and non-invertibility

In linear regression there are two approaches to minimizing the cost function: the first uses gradient descent; the second sets the derivative of the cost function to zero and solves the resulting equation. When the equation is…
Ahmet Yılmaz · 333
13 votes · 2 answers

What to do when sample covariance matrix is not invertible?

I am working on some clustering techniques, where for a given cluster of $d$-dimensional vectors I assume a multivariate normal distribution and calculate the sample $d$-dimensional mean vector and the sample covariance matrix. Then, when trying to decide…
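A common remedy when the sample covariance is singular (e.g., fewer samples than dimensions) is shrinkage toward a scaled identity. A sketch with an illustrative shrinkage weight `lam` (the value 0.1 is an assumption, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d = 10, 21                # fewer samples than dimensions
X = rng.standard_normal((n, d))

S = np.cov(X, rowvar=False)  # rank at most n - 1 < d: singular
lam = 0.1                    # hypothetical shrinkage weight
S_shrunk = (1 - lam) * S + lam * np.trace(S) / d * np.eye(d)
```

The shrunk estimate is positive definite, hence invertible, while preserving the overall scale (its trace matches a convex blend of the original).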
13 votes · 1 answer

Explain how `eigen` helps invert a matrix

My question relates to a computation technique exploited in geoR:::.negloglik.GRF or geoR:::solve.geoR. In a linear mixed model setup: $$ Y=X\beta+Zb+e $$ where $\beta$ and $b$ are the fixed and random effects respectively. Also,…
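The general mechanism behind the geoR trick (sketched in NumPy rather than R, with an illustrative random matrix): for a symmetric positive-definite matrix, $A = Q\Lambda Q^T$ gives $A^{-1} = Q\Lambda^{-1}Q^T$, so once the eigendecomposition is available the inverse costs only a diagonal rescaling.

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)  # symmetric positive definite

w, Q = np.linalg.eigh(A)  # A = Q diag(w) Q^T
A_inv = (Q / w) @ Q.T     # A^{-1} = Q diag(1/w) Q^T
```

This also pays off when the same decomposition is reused across many values of a variance parameter, as in mixed-model likelihoods.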
12 votes · 3 answers

Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression

The closed form of $w$ in linear regression can be written as $\hat{w}=(X^TX)^{-1}X^Ty$. How can we intuitively explain the role of $(X^TX)^{-1}$ in this equation?
Darshak · 185
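One way to ground the intuition: $X^Ty$ collects the raw feature-target covariances, and $(X^TX)^{-1}$ corrects for the overlap between correlated features. Numerically the closed form is best evaluated by solving the normal equations rather than inverting; a sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)

w_closed = np.linalg.solve(X.T @ X, X.T @ y)     # (X^T X)^{-1} X^T y via solve
w_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # QR/SVD-based least squares
```

Both routes give the same coefficients; `lstsq` is preferred when $X$ may be ill-conditioned.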
11 votes · 1 answer

How to calculate the inverse of sum of a Kronecker product and a diagonal matrix

I want to calculate the inverse of a matrix of the form $S = (A\otimes B+C)$, where $A$ and $B$ are symmetric and invertible, and $C$ is a diagonal matrix with positive elements. Basically, if the dimension is high, direct calculation can be expensive.…
Eridk Poliruyt · 425
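A sketch of the special case $C = cI$ (an assumption; the question's general diagonal $C$ is harder): since $A\otimes B = (Q_A\otimes Q_B)(\Lambda_A\otimes\Lambda_B)(Q_A\otimes Q_B)^T$ for symmetric $A, B$, adding $cI$ only shifts the eigenvalues, so the inverse needs two small eigendecompositions rather than one large one.

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((3, 3)); A = A @ A.T + np.eye(3)  # symmetric PD
B = rng.standard_normal((4, 4)); B = B @ B.T + np.eye(4)
c = 2.0                                                   # special case C = c I

wa, Qa = np.linalg.eigh(A)
wb, Qb = np.linalg.eigh(B)
Q = np.kron(Qa, Qb)              # eigenvectors of A ⊗ B
d = np.kron(wa, wb) + c          # eigenvalues of A ⊗ B + c I
S_inv = (Q / d) @ Q.T
```

For a general positive diagonal $C$ this shortcut no longer applies and iterative solvers are the usual fallback.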
10 votes · 2 answers

Lucid explanation for "numerical stability of matrix inversion" in ridge regression and its role in reducing overfit

I understand that we can employ regularization in a least squares regression problem as $$\boldsymbol{w}^* = \operatorname*{argmin}_w \left[ (\mathbf y-\mathbf{Xw})^T(\boldsymbol{y}-\mathbf{Xw}) + \lambda\|\boldsymbol{w}\|^2 \right]$$ and that this…
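The stability claim in the question can be seen directly: adding $\lambda I$ to $X^TX$ lifts its smallest eigenvalue, shrinking the condition number. A sketch with a deliberately near-collinear design (the noise level and $\lambda$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.standard_normal(30)
# second column is a near-copy of the first: X^T X is nearly singular
X = np.column_stack([x, x + 1e-6 * rng.standard_normal(30)])
y = rng.standard_normal(30)

lam = 1.0
cond_plain = np.linalg.cond(X.T @ X)
cond_ridge = np.linalg.cond(X.T @ X + lam * np.eye(2))
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
```

The regularized system is orders of magnitude better conditioned, which is the numerical face of ridge's statistical shrinkage.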
10 votes · 1 answer

Fast computation/estimation of a low-rank linear system

Linear systems of equations are pervasive in computational statistics. One special system I have encountered (e.g., in factor analysis) is the system $$Ax=b$$ where $$A=D+ B \Omega B^T$$ Here $D$ is an $n\times n$ diagonal matrix with a strictly…
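The standard tool for this diagonal-plus-low-rank structure is the Woodbury identity, $(D + B\Omega B^T)^{-1} = D^{-1} - D^{-1}B(\Omega^{-1} + B^T D^{-1} B)^{-1}B^T D^{-1}$, which replaces an $n\times n$ solve with a $k\times k$ one. A sketch with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(10)
n, k = 50, 3
d = rng.uniform(1.0, 2.0, n)  # strictly positive diagonal of D
B = rng.standard_normal((n, k))
Omega = np.eye(k)
b = rng.standard_normal(n)

# Woodbury solve: only k x k dense linear algebra is needed
Dinv_b = b / d
Dinv_B = B / d[:, None]
small = np.linalg.inv(Omega) + B.T @ Dinv_B  # k x k capacitance matrix
x = Dinv_b - Dinv_B @ np.linalg.solve(small, B.T @ Dinv_b)
```

This is exactly the structure exploited in factor-analysis and Kalman-filter implementations.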
9 votes · 3 answers

Numerical Instability of calculating inverse covariance matrix

I have 65 samples of 21-dimensional data (pasted here) and I am constructing the covariance matrix from them. When computed in C++ I get the covariance matrix pasted here, and when computed in matlab from the data (as shown below) I get the…
Aly · 1,149
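Discrepancies like the one described are typical when the covariance matrix is ill-conditioned: a large condition number means inversion amplifies tiny rounding differences between implementations. A quick diagnostic sketch (the near-duplicate dimension is an illustrative way to provoke the problem):

```python
import numpy as np

rng = np.random.default_rng(11)
X = rng.standard_normal((65, 21))
X[:, 0] = X[:, 1] + 1e-7 * rng.standard_normal(65)  # near-duplicate dimensions

S = np.cov(X, rowvar=False)
kappa = np.linalg.cond(S)  # large kappa: inversion amplifies rounding noise
```

Checking `kappa` before inverting tells you whether C++ and MATLAB should even be expected to agree.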
9 votes · 1 answer

What is the physical significance of inverse of a matrix?

I was asked this question in an interview. Though I tried my best to answer the question in whatever way I could (I was explaining in terms of mathematics), the professor looked upset. Any idea? The professor was not interested in…
Upendra01 · 1,566
8 votes · 2 answers

Variance-covariance matrix of the parameter estimates wrongly calculated?

I fitted a hyperbolic distribution to my data with the hyperbFit(mydata,hessian=TRUE) command (package HyperbolicDist). The hessian looks like: > hyperbfitmymodel$hessian hyperbPi lZeta lDelta mu hyperbPi 536.61654…
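The usual recipe behind such a check (not hyperbFit's internals, just the general maximum-likelihood result): the asymptotic variance-covariance matrix of the estimates is the inverse of the Hessian of the negative log-likelihood at the optimum, and standard errors are the square roots of its diagonal. A sketch with hypothetical Hessian values:

```python
import numpy as np

# hypothetical 2x2 Hessian of a negative log-likelihood at the optimum
H = np.array([[536.6, 12.0],
              [12.0,  80.0]])

cov_params = np.linalg.inv(H)            # asymptotic var-cov of the estimates
std_errs = np.sqrt(np.diag(cov_params))  # standard errors of the parameters
```

If a reported var-cov matrix disagrees with this inverse, either the Hessian is not evaluated at the optimum or a sign/parameterization convention differs.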
8 votes · 0 answers

Geometric intuition for why an outer product of two vectors makes a correlation matrix?

I understand that the outer product of two vectors, say representing two detrended time series, can represent a cross-correlation (well covariance) matrix. I also know that the inverse of a correlation matrix represents the partial correlations…
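The algebra behind the question's premise can be checked in a few lines: stacking standardized series as rows of $Z$, the sum of per-time-point outer products $\sum_t z_t z_t^T = ZZ^T$, suitably scaled, is exactly the correlation matrix. A sketch with hypothetical random series:

```python
import numpy as np

rng = np.random.default_rng(12)
T = 200
X = rng.standard_normal((3, T))  # three illustrative "detrended" series
Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, ddof=1, keepdims=True)

# sum of outer products of the time slices equals Z Z^T;
# scaled by 1/(T-1), it is the correlation matrix
R = Z @ Z.T / (T - 1)
```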