
In some sense the multivariate normal is the "nicest" distribution that we can describe using only a vector (rank one tensor) and a symmetric positive definite matrix (rank two tensor). $$\mathcal{N}(\mu_i, \Sigma_{ij}) = |2\pi\Sigma|^{-\frac{1}{2}}e^{-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)}$$
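
As a quick numerical sanity check (my own illustration, not part of the original question), the density above can be evaluated by hand and compared against `scipy.stats.multivariate_normal`:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Evaluate |2*pi*Sigma|^{-1/2} exp(-(1/2)(x-mu)^T Sigma^{-1} (x-mu))
# directly and compare against scipy's implementation.
mu = np.array([0.5, -1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
x = np.array([1.0, 0.0])

diff = x - mu
by_hand = (np.linalg.det(2 * np.pi * Sigma) ** -0.5
           * np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)))
via_scipy = multivariate_normal(mean=mu, cov=Sigma).pdf(x)

assert np.isclose(by_hand, via_scipy)
```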

Is there a generalization to higher order tensors? I.e., is there a "nicest" distribution described by a vector, symmetric matrix, and symmetric third order tensor? $\mathcal{M}(\mu_i, \Sigma_{ij}, \Pi_{ijk})$ ?

chl
MRocklin
  • Not quite what you're looking for, but there is a [matrix normal distribution](http://en.wikipedia.org/wiki/Matrix_normal_distribution) which arises often in multivariate analysis. – cardinal Sep 29 '11 at 16:41
  • What would the tensor represent? – whuber Sep 29 '11 at 16:52
  • @whuber I consider this part of the question. There are some nice ways to define the covariance matrix which might be extensible to higher rank, however. One such definition of the covariance is the expectation of the outer product of the random vector $(x-\mu)$ with itself. – MRocklin Sep 29 '11 at 17:49
  • Let's ask the question for the one-dimensional case: is there a "nicest" distribution described by a location parameter, a variance parameter, and a skewness parameter? In other words, $\mathcal{M}(\mu_i, \Sigma_{ij}, \Pi_{ijk})$ again, but with $i=j=k=1$? – Eric Sep 30 '11 at 00:44
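
The outer-product view in the comments is easy to make concrete. A rough numerical sketch (mine, with illustrative variable names): the empirical covariance is the average of outer products of centered samples, and the same `einsum` pattern extends directly to a rank-three central-moment tensor:

```python
import numpy as np

# Covariance as E[(x - mu)(x - mu)^T], and its natural rank-three
# analogue, the central third-moment tensor E[(x-mu) (x) (x-mu) (x) (x-mu)].
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 3))        # i.i.d. standard normal rows
centered = X - X.mean(axis=0)

Sigma_hat = np.einsum('ni,nj->ij', centered, centered) / len(X)
Pi_hat = np.einsum('ni,nj,nk->ijk', centered, centered, centered) / len(X)

# For a symmetric distribution the third-moment tensor should be near zero.
print(np.abs(Pi_hat).max())              # small, on the order of n^{-1/2}
```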

1 Answer


I don't think there is a positive answer to your question. The beauty of the normal distribution, univariate or multivariate, is that it is easily defined by its cumulants of order higher than two being zero. (The cumulant of order $k$ is the normalized $k$-th derivative of the logarithm of the characteristic function: $\kappa_k = i^{-k} \frac{\partial^k}{\partial t^k} \ln \phi (t) \big|_{t=0}$.) The CLT states essentially that for the standardized sum of $n$ i.i.d. variables, $\kappa_k = o(1)$ as $n\to\infty$ for every $k>2$ (and the rate can be established).
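
A quick empirical illustration of this (my own sketch, not from the answer): `scipy.stats.kstat` gives unbiased cumulant estimates up to order four, which are near zero beyond order two for normal samples but not for, say, exponential samples (whose cumulants are $\kappa_k=(k-1)!$ at rate one).

```python
import numpy as np
from scipy.stats import kstat

# Estimated cumulants of orders 1..4 for normal vs. exponential samples.
rng = np.random.default_rng(1)
normal = rng.normal(size=200_000)
expo = rng.exponential(size=200_000)

print([round(kstat(normal, k), 3) for k in (1, 2, 3, 4)])  # ~ [0, 1, 0, 0]
print([round(kstat(expo, k), 3) for k in (1, 2, 3, 4)])    # ~ [1, 1, 2, 6]
```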

However, the property of zero cumulants is very fragile. Once you depart from a zero third cumulant, the higher-order cumulants cannot all vanish as well: by the Marcinkiewicz theorem, the only distribution whose cumulants vanish beyond some finite order is the normal, so there is no distribution with $\kappa_3\neq 0$ and $\kappa_k=0$ for all $k\geq 4$. So for each value of skewness (which is reflected in the third cumulant), there is a range of reasonable values for the other cumulants, and the beauty of the distribution will be in the eye of the beholder.

In a way, the "closest relative" of the multivariate normal distribution is the skew-normal distribution. Its density is a normal density "filtered" by a normal cdf: $f(x;\Sigma,\alpha) = 2f(x;\Sigma)\Phi(\alpha'x)$, where $f(x;\Sigma)$ is the density of a multivariate normal with zero mean vector and covariance matrix $\Sigma$, and $\Phi(z)$ is the standard normal cdf. So you don't really have a tensor here, only a skewing vector $\alpha$. Nicely enough, for $\alpha=0$ you recover the multivariate normal distribution as a special case.
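
A minimal sketch of this density in code (my own construction from the formula above, zero location; `skew_normal_pdf` is an illustrative helper name): $\alpha=0$ collapses it to the plain multivariate normal, and a Monte Carlo average confirms the skewed density still integrates to one.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def skew_normal_pdf(x, Sigma, alpha):
    # f(x) = 2 * phi_d(x; 0, Sigma) * Phi(alpha'x)  (Azzalini's construction)
    return 2 * multivariate_normal(cov=Sigma).pdf(x) * norm.cdf(alpha @ x)

Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
alpha = np.array([3.0, -1.0])
x0 = np.array([0.3, -0.2])

# alpha = 0 recovers the ordinary multivariate normal density.
assert np.isclose(skew_normal_pdf(x0, Sigma, np.zeros(2)),
                  multivariate_normal(cov=Sigma).pdf(x0))

# Monte Carlo check that the density integrates to one: under the base
# normal, E[2 * Phi(alpha'X)] = 1, since Phi(alpha'X) is symmetric about 1/2.
rng = np.random.default_rng(2)
X = rng.multivariate_normal(np.zeros(2), Sigma, size=200_000)
print((2 * norm.cdf(X @ alpha)).mean())   # ~ 1.0
```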

Another way to approach the issue is through stable laws. A stable law is a distribution such that a linear combination of independent random variables having this distribution again has this distribution, possibly rescaled and shifted. The normal distribution is an obvious example; the Cauchy is another, although a far less obvious one. Stable laws are defined implicitly by their characteristic function: $\phi(t) = \exp\left[ i t \mu - |ct|^\alpha \left(1-i \beta \, {\rm sign} (t) \Phi\right)\right]$. Here, $\mu$ is the shift parameter, $c$ is the scale parameter, $\alpha$ is the stability parameter, $\beta$ is the asymmetry parameter, and $\Phi=\tan(\pi\alpha/2)$ for $\alpha\neq1$, while $\Phi=-\frac{2}{\pi} \ln|t|$ for $\alpha=1$. ($\alpha=2, \beta=0$ gives the normal distribution; $\alpha=1, \beta=0$ gives the Cauchy distribution; moments beyond the first exist only for $\alpha=2$.) Wikipedia provides a number of illustrations. Multivariate extensions of stable laws do exist, too (thanks to @mpiktas for pointing them out!).
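
The characteristic function above can be checked numerically in the two closed-form cases (a small sketch of mine, with `stable_cf` an illustrative helper name): for $\alpha=2$ it reduces to $e^{-c^2t^2}$, the CF of $N(\mu, 2c^2)$, and for $\alpha=1,\beta=0$ to $e^{-c|t|}$, the Cauchy CF.

```python
import numpy as np

# phi(t) = exp(i t mu - |c t|^alpha (1 - i beta sign(t) Phi)),
# with Phi = tan(pi*alpha/2) for alpha != 1 and Phi = -(2/pi) ln|t| for alpha = 1.
def stable_cf(t, alpha, beta, c=1.0, mu=0.0):
    t = np.asarray(t, dtype=float)
    if alpha == 1:
        Phi = -(2 / np.pi) * np.log(np.abs(t))
    else:
        Phi = np.tan(np.pi * alpha / 2)
    return np.exp(1j * t * mu
                  - np.abs(c * t) ** alpha * (1 - 1j * beta * np.sign(t) * Phi))

t = np.linspace(-3, 3, 13)
t = t[t != 0]                 # avoid log|0| in the alpha = 1 branch

# alpha = 2, beta = 0: exp(-c^2 t^2), the CF of N(0, 2 c^2).
assert np.allclose(stable_cf(t, alpha=2, beta=0), np.exp(-t**2))

# alpha = 1, beta = 0: exp(-c|t|), the CF of a standard Cauchy.
assert np.allclose(stable_cf(t, alpha=1, beta=0), np.exp(-np.abs(t)))
```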

StasK
  • Hm, there is a wikipedia page for multivariate stable laws: http://en.wikipedia.org/wiki/Multivariate_stable_distribution, unless you have different extensions in mind. Nice answer btw, +1. – mpiktas Sep 30 '11 at 12:38
  • (+1) The typical definition of a stable law is more restrictive than you've given and requires that the form of the distribution be closed under *all* linear transformations, not just sums. In particular, the Poisson is *not* a stable law. (For the same reason that, e.g., the Gamma of fixed scale parameter is not either, even though it is "stable" under sums.) In fact, it is well-known that the **only** stable distribution with finite variance is the normal. – cardinal Sep 30 '11 at 14:06
  • @cardinal, thanks for the correction. My memory served me wrong :(. – StasK Sep 30 '11 at 14:10
  • It happens to all of us. :) – cardinal Sep 30 '11 at 14:16
  • Could you please justify a point in the above post by answering my question at ? – Arnold Neumaier Oct 29 '12 at 13:53