
In the exponent of the multivariate distribution, there are 2 vectors and a square matrix multiplied together to get a scalar result:

$$(\mathbf{x} - \mu)^{\text{T}}\Gamma^{-1}(\mathbf{x} - \mu)$$

where $\Gamma$ is the covariance matrix of the random variables $X_1, \ldots, X_n$, $\mathbf{x} = (x_1, \ldots, x_n)^{\text{T}}$ is the vector of values that $X_1, \ldots, X_n$ take, and $\mu = (\mu_1, \ldots, \mu_n)^\text{T}$ is the vector of means of $X_1, \ldots, X_n$.

Right-multiplying a matrix by a vector, then left-multiplying the result by the transpose of that same vector, seems like something that should have a nice intuitive explanation. Is there an intuitive explanation for this operation in general, and specifically for this case in the exponent of the multivariate normal distribution?
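Concretely, here is the operation I mean, sketched in numpy with made-up numbers (the mean and covariance below are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
x = rng.standard_normal(n)       # values taken by X_1, ..., X_n
mu = np.zeros(n)                 # mean vector (zeros, just for illustration)
A = rng.standard_normal((n, n))
Gamma = A @ A.T + n * np.eye(n)  # an arbitrary positive-definite covariance

# (1 x n) row vector times (n x n) matrix times (n x 1) column vector:
result = (x - mu) @ np.linalg.inv(Gamma) @ (x - mu)
print(np.ndim(result))  # 0, i.e. a scalar
```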

kjetil b halvorsen
kreamyyy

1 Answer

That quadratic form in the exponent is a squared Mahalanobis distance, i.e. a measure of the distance between $\mathbf{x}$ and the expected value $\mu$, scaled by the covariance structure. For details see for instance Bottom to top explanation of the Mahalanobis distance?.
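One way to see the intuition numerically: the quadratic form equals the plain squared Euclidean distance after "whitening" the coordinates, i.e. after transforming to variables whose covariance is the identity. A small numpy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical 2-D example: mean and covariance are made-up numbers.
mu = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
x = np.array([2.0, 4.0])

# The quadratic form from the exponent: a scalar.
d2 = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)

# Same number via whitening: in the transformed coordinates the
# squared Mahalanobis distance is just ordinary squared length.
L = np.linalg.cholesky(Sigma)   # Sigma = L @ L.T
z = np.linalg.solve(L, x - mu)  # whitened residual
print(d2, z @ z)                # both equal the squared Mahalanobis distance
```

So the matrix-sandwich $(\mathbf{x}-\mu)^{\text{T}}\Gamma^{-1}(\mathbf{x}-\mu)$ can be read as: undo the correlations and scales encoded in $\Gamma$, then measure ordinary distance from the mean.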

kjetil b halvorsen