
Given a very large number of samples $n$ from the population, suppose that I am running a standard hypothesis test on the mean:

$$\begin{align}H_0:&\ \mu\leq\mu_0\\ H_1:&\ \mu>\mu_0 \end{align}$$

I am interested in the power of my test as I increase $n$. If the probability measures associated with the null and alternative hypotheses are nice (such as Gaussians), my task is pretty easy: I can readily characterize the power as a function of $n$ and my significance level (and show how it increases as more observations are added).
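To make this concrete, here is a minimal sketch of that power calculation in the nice case, assuming a known-variance Gaussian (i.e. a one-sided $z$-test); the effect size `delta`, standard deviation `sigma`, and significance level below are illustrative values, not anything fixed by the problem:

```python
# Sketch: power of a one-sided z-test of H0: mu <= mu0 vs H1: mu > mu0,
# assuming X_i ~ N(mu0 + delta, sigma^2) with sigma known (illustrative values).
import numpy as np
from scipy.stats import norm

def one_sided_power(n, delta=0.5, sigma=1.0, alpha=0.05):
    """Power of the z-test when the true mean exceeds mu0 by `delta`."""
    z_crit = norm.ppf(1 - alpha)                      # critical value under H0
    # Under H1 the standardized sample mean is shifted by delta*sqrt(n)/sigma.
    return 1 - norm.cdf(z_crit - delta * np.sqrt(n) / sigma)

for n in (10, 50, 200, 1000):
    print(n, round(one_sided_power(n), 4))            # power -> 1 as n grows
```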

Now, suppose that the probability measure associated with $H_1$ is pathological in that it has no mean (e.g., the Cauchy distribution) or its mean is infinite (e.g., an appropriately parametrized Lévy distribution). The probability measure associated with $H_0$ still has a finite mean. I believe this hypothesis test can be written down as follows:

$$\begin{align}H_0:&\ \mu=\mu_0\\ H_1:&\ \mu~\text{infinite or undefined} \end{align}$$

Is there a way to perform such a test, supposing that my number of observations from the population is very large ($n\rightarrow\infty$)? I think there should be, considering that as $n$ increases, if the underlying distribution has a mean, the sample average should converge to it by the Law of Large Numbers; and I am assuming that if the underlying distribution has no mean, the average doesn't converge to anything. However, can one make a precise statement about the power of a statistical hypothesis test in terms of $n$ in that case? What would a test that separates these two hypotheses look like?
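To illustrate the behaviour I am relying on, here is a small simulation sketch (the distributions and sample size are arbitrary choices): the running mean of Gaussian draws settles down, while the running mean of Cauchy draws keeps making large jumps no matter how far out you go.

```python
# Sketch: running sample means for a distribution with a mean (Normal)
# versus one without (Cauchy). Sample size is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

normal_draws = rng.normal(loc=0.0, scale=1.0, size=n)
cauchy_draws = rng.standard_cauchy(size=n)

def running_mean(x):
    return np.cumsum(x) / np.arange(1, len(x) + 1)

# The Normal running mean converges to 0; the Cauchy running mean is itself
# standard Cauchy at every n, so it never settles down.
rm_normal = running_mean(normal_draws)
rm_cauchy = running_mean(cauchy_draws)
for m in (10**3, 10**4, 10**5):
    print(m, rm_normal[m - 1], rm_cauchy[m - 1])
```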

I wonder if one can prove that in such a case there exists a test that has power of 1 for a finite sample size...

EDIT: Attempted to clarify what the second hypothesis test might look like.

M.B.M.
  • Both $H_0$ and $H_1$ are meaningless for any distribution without a finite mean. – whuber Nov 11 '11 at 19:48
  • As whuber points out, the hypotheses are meaningless when the mean is infinite, or undefined as in the case of a Cauchy random variable. How about thinking of a test for distinguishing between hypotheses $$\begin{align*}H_0: &\theta = 0\\H_1: &\theta > 0\end{align*}$$ where the observations are assumed to have density function $f(x) = \frac{1}{\pi}\frac{1}{1 + (x-\theta)^2}$, that is, a Cauchy density with _mode_ $\theta$? – Dilip Sarwate Nov 11 '11 at 20:49
  • I see @whuber's point. I'm looking for a hypothesis test that can decide whether the observations belong to a population with a mean or without it. I attempted to clarify my question to this end. Do my clarifications make sense? I don't think standard tests would apply here, so I am wondering if someone has studied this... – M.B.M. Nov 11 '11 at 21:39
  • If you draw $X$ from a Cauchy distribution, then $\bar{X}$ is also Cauchy distributed and thus $P\{|\bar{X}| < \alpha\} = \frac{2}{\pi}\arctan(\alpha)$. Use this to set a threshold to have whatever power you want for the test. – Dilip Sarwate Nov 11 '11 at 22:37
  • You still have problems. Consider the set of mixtures of a Cauchy distribution (in proportion $p$) and the standard Normal distribution (in proportion $1-p$); let $H_0:p=0$ and $H_1:p\gt 0$. Then no matter how large $n$ is and no matter how small the power $\beta$ is allowed to be, when $\beta$ is nonzero there are elements of $H_1$ that cannot be differentiated from $H_0$ with power $\beta$ or larger: they are those for which $p\lt 1-(1-\beta)^{1/n}$. This means that the sample has less than $\beta$ chance of containing any draw from the Cauchy component. – whuber Nov 11 '11 at 23:01
  • In its current form, this question appears to be a variant of http://stats.stackexchange.com/questions/2504/test-for-finite-variance. – whuber Nov 11 '11 at 23:04
  • Aha, so basically this is impossible with finite sample. It's like looking for a needle in the haystack when the needle isn't even there. @whuber: the link to the question on testing for variance is very helpful. Thanks! – M.B.M. Nov 12 '11 at 01:33
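For concreteness, here is a minimal numerical sketch of the thresholding idea in Dilip Sarwate's comment above (my own illustration, not his code; the power target and sample sizes are arbitrary choices): for i.i.d. standard Cauchy draws the sample mean is again standard Cauchy at every $n$, so its quantiles give a cutoff directly.

```python
# Sketch of the thresholding idea from the comments: if the X_i are i.i.d.
# standard Cauchy, their sample mean is again standard Cauchy for every n,
# so P(|x_bar| < c) = (2/pi) * arctan(c). Numbers here are illustrative.
import numpy as np
from scipy.stats import cauchy

rng = np.random.default_rng(1)
n, reps = 1_000, 10_000

# Empirical check of the distributional fact.
means = rng.standard_cauchy(size=(reps, n)).mean(axis=1)
print("empirical   P(|x_bar| < 1):", np.mean(np.abs(means) < 1.0))
print("theoretical (2/pi)arctan(1):", 2 / np.pi * np.arctan(1.0))

# Rejecting "finite mean" whenever |x_bar| > c has power 1 - (2/pi)arctan(c)
# against the standard Cauchy alternative, independent of n. For a target
# power of 0.95 against that alternative:
c = cauchy.ppf(0.525)   # solves 1 - (2/pi)arctan(c) = 0.95, i.e. c = tan(0.025*pi)
print("cutoff c:", c, "  power vs. Cauchy:", 1 - 2 / np.pi * np.arctan(c))
```

As whuber's mixture example shows, this only has power against the pure Cauchy alternative; it says nothing about alternatives that mix a small Cauchy component into a well-behaved distribution.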

0 Answers