
I have several questions about the usual gaussianity (broad normality) assumptions in econometrics. Although people often check for normality (with apparently weak tests), I've seen only one example of "gaussianity" testing.

1. Is the "finite variance" assumption the same as a gaussianity assumption? That is, is it the same as assuming that the variable follows some distribution in the family of elliptically symmetric distributions?

2. If gaussianity is a requirement, how do you test for it?

The one example I know of informal testing for gaussianity is N. N. Taleb's "Errors, Robustness, and The Fourth Quadrant". Here is the SSRN PDF version: http://maint.ssrn.com/?abstract_id=1343042 and here is the technical part of the paper in a friendly HTML format: http://www.fooledbyrandomness.com/EDGE/index.html.

In that paper Taleb applies several measurements to a large number of financial time series, trying to show that gaussianity is implausible.

He does so by:

  • Checking whether the data are consistent with the central limit theorem, by seeing whether kurtosis converges as the level of data aggregation increases.
  • Checking whether the conditional expectations of the variable show a gaussian decay or follow a power law (I think; page 8 of the PDF).
  • Checking for a non-gaussian incidence of rare events.
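
Roughly, the first of those checks could be sketched like this (my own illustration of the idea, not Taleb's actual procedure; the Student-t sample is just a stand-in for a fat-tailed return series):

```python
# Kurtosis-under-aggregation check: for Gaussian data, the excess kurtosis
# of n-period sums stays near 0; for fat-tailed data it starts large and
# converges only slowly (or not at all) as aggregation increases.
import numpy as np
from scipy.stats import kurtosis, t

rng = np.random.default_rng(0)
gaussian = rng.standard_normal(100_000)
heavy = t.rvs(df=3, size=100_000, random_state=rng)  # finite variance, fat tails

def aggregated_kurtosis(x, n):
    """Excess kurtosis (Fisher definition: 0 for a Gaussian) of
    non-overlapping n-period sums of x."""
    m = len(x) // n
    sums = x[: m * n].reshape(m, n).sum(axis=1)
    return kurtosis(sums)

for n in (1, 10, 50):
    print(n, aggregated_kurtosis(gaussian, n), aggregated_kurtosis(heavy, n))
```

The Gaussian column should hover near 0 at every aggregation level, while the heavy-tailed column starts far above it.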

Finally, questions #3 and #4:

3. Taleb performs his tests on data of a much higher frequency than what is commonly found in macroeconomics. What would the appropriate tests be for monthly data?

4. Are there other necessary conditions of the usual econometric models, besides finiteness of variance, that are not typically tested for?

Please bear in mind that I'm a graduate student taking a fairly basic time series course, this is just for intellectual curiosity.

Andre Silva
s_a

1 Answer


I'll go part by part, and I suggest you not follow the approach you quoted in the introduction: it is wrong to try to prove gaussianity with tests that give a certain value (or range of values) if the data are gaussian (if A implies B, B does not necessarily imply A).

  1. If I understand correctly, the answer is no: there are several distributions with finite variance that are non-gaussian (e.g., uniformly distributed white noise in a time series). Furthermore, the family of elliptically symmetric distributions is, by definition, of the form \begin{equation*}p(\mathbf{x})=\frac{1}{\alpha|\Sigma|^{1/2}}f\left(-\frac{1}{2}\mathbf{x}^T\Sigma^{-1}\mathbf{x}\right)\end{equation*} where you can view $\mathbf{x}$ as a random variable or a random vector (as you like, but I wrote it suggestively as a random vector). In principle, $f(\cdot)$ can be any function such that $\int p(\mathbf{x})\,d\mathbf{x}=1$; the special case $f(\cdot)=\exp(\cdot)$ gives the gaussian distribution. In general, then, the elliptically symmetric distributions are non-gaussian, with the single exception $f(\cdot)=\exp(\cdot)$.

  2. If gaussianity is a requirement, you have to test for non-gaussianity. There are several ways of doing so; the most intuitive is to look for higher-order cumulants in your data, because the gaussian distribution is the only one with a finite number of non-zero cumulants (this is known as Marcinkiewicz's theorem). However, this is not recommended, because (a) it is computationally expensive and (b) you'll never be sure! One way of measuring (and therefore testing for) non-gaussianity that has been particularly useful in Independent Component Analysis (an application where you need to measure the degree of non-gaussianity of samples) is negentropy. For an introduction to these measures, see the notes by Hyvärinen on the subject, which are an extract of the paper by Hyvärinen & Oja (2000) on Independent Component Analysis. If you are interested, search for his papers on efficient ways of approximating negentropy.

  3. It really depends on how many samples we are talking about...

  4. I didn't understand: conditions for what?
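
To make the point in (1) concrete with an example of my own: the uniform distribution has finite variance but is clearly non-gaussian, which a standard normality test (here Jarque-Bera, which looks at skewness and kurtosis) picks up immediately:

```python
# Finite variance does not imply gaussianity: uniform draws on [-1, 1]
# have variance (b - a)^2 / 12 = 1/3, yet normality is firmly rejected.
import numpy as np
from scipy.stats import jarque_bera

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, size=50_000)

stat, pvalue = jarque_bera(u)
print(u.var(), pvalue)  # variance near 1/3; p-value essentially 0
```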
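
As a rough sketch of the negentropy idea from (2): a standard approximation due to Hyvärinen is $J(y)\approx\left(E[G(y)]-E[G(\nu)]\right)^2$ with $G(u)=\log\cosh(u)$ and $\nu$ standard normal (I drop the proportionality constant here). It is close to zero for gaussian data and positive otherwise:

```python
# Negentropy approximation (log-cosh contrast): ~0 for Gaussian samples,
# clearly positive for non-Gaussian ones such as the Laplace distribution.
import numpy as np

rng = np.random.default_rng(2)

def negentropy_approx(y, n_ref=1_000_000):
    y = (y - y.mean()) / y.std()                 # standardize first
    G = lambda u: np.log(np.cosh(u))
    ref = G(rng.standard_normal(n_ref)).mean()   # Monte Carlo E[G(nu)]
    return (G(y).mean() - ref) ** 2

gauss = rng.standard_normal(100_000)
laplace = rng.laplace(size=100_000)              # heavy-tailed, non-Gaussian

print(negentropy_approx(gauss), negentropy_approx(laplace))
```

The gaussian sample's value is driven only by Monte Carlo noise, while the Laplace sample's value is orders of magnitude larger.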

Néstor