
I want to test

$$H_0: \mu \leq 0 \quad \text{vs.} \quad H_1: \mu > 0.$$ I am using a t-test, so the statistic $T$ has $\nu$ degrees of freedom, which depend on the sample size.

What is the difference between testing $H_0: \mu \leq 0$ and testing $H_0: \mu = 0$, in both cases against $H_1: \mu > 0$?

Is it possible to test only $H_0: \mu \leq 0$?

In R's `t.test()` command, it seems that this is equivalent to specifying `alternative = "greater"`, but there seems to be no distinction between the two null hypotheses?

Goose
    Welcome to Cross Validated! I think the linked post answers your question. Note that even with the composite null $H_0: \mu \leq 0$, the simple null $H_0: \mu = 0$ is the closest to the alternative hypothesis & the tests therefore become the same in practice when $t>0$. – Scortchi - Reinstate Monica Jan 02 '19 at 17:30

1 Answer


The difference is in the alternative hypothesis.

H0: μ ≤ 0 --> Ha: μ > 0

H0: μ > 0 --> Ha: μ ≤ 0

H0: μ = 0 --> Ha: μ > 0 or μ < 0

I.e. for the first two cases, you are asking: "Is the mean significantly bigger (for H0: μ ≤ 0) / smaller (for H0: μ > 0) than 0?", while for the third case you are asking: "Is the mean significantly different from 0?" (either bigger or smaller).

In case you're testing H0: μ ≤ 0, you sum up the upper tail of the null distribution to obtain a p-value (one-sided test).

When testing H0: μ = 0 (against a two-sided alternative), you sum up both the upper and the lower tail (two-sided test).
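The one- vs. two-tailed distinction above can be checked numerically. The question mentions R's `t.test()`; this is a Python sketch of the same computation using scipy (the sample data are made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=1.0, size=20)  # made-up sample

n = len(x)
df = n - 1
t_stat = x.mean() / (x.std(ddof=1) / np.sqrt(n))  # one-sample t statistic

# One-sided test (H0: mu <= 0 vs H1: mu > 0): upper tail only
p_one_sided = stats.t.sf(t_stat, df)

# Two-sided test (H0: mu = 0 vs H1: mu != 0): both tails
p_two_sided = 2 * stats.t.sf(abs(t_stat), df)

# Cross-check against scipy's built-in one-sample t-test
res_greater = stats.ttest_1samp(x, 0, alternative='greater')
res_two_sided = stats.ttest_1samp(x, 0, alternative='two-sided')
```

Here `alternative='greater'` plays the same role as `alternative = "greater"` in R's `t.test()`: the p-value is the upper-tail area beyond the observed statistic.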

Scholar
  • I have added more details to the question. In all the cases, I am considering the same alternative hypothesis; it is the null that changes. Apologies if this was not clear. – Goose Jan 02 '19 at 16:31
  • No you're not. The alternative hypothesis, or rather the (infinitely large) set of alternatives, is defined based on the null hypothesis, i.e. for H0: μ ≤ 0, the set of alternatives includes every μ1 > 0, while for H0: μ = 0, the set includes every μ1 ≠ 0. Here, you define H0 based on these sets. For example, "greater" implies H0: μ ≤ 0, while "two.sided" is equivalent to "greater" OR "less", thus implying H0: μ = 0. – Scholar Jan 02 '19 at 16:37
  • This is actually a classical theory known as one-sided alternative tests: [see](http://www.stat.cmu.edu/~larry/=stat705/Lecture10.pdf), so, the alternative is not always the complement of the null. – Goose Jan 02 '19 at 16:39
  • @Goose, see my edited comment. Again, H0: μ = 0 and H0: μ ≤ 0 are very different hypotheses, since you sum up either one or both tails of the null distribution. See the plots given here: https://en.wikipedia.org/wiki/One-_and_two-tailed_tests – Scholar Jan 02 '19 at 16:43
  • I'm not sure what we are actually arguing about; H0: μ = 0 and H0: μ ≤ 0 are inherently different. If I wasn't clear enough in my answer, please point that out. – Scholar Jan 02 '19 at 16:45
    My question is about whether it is different to test $H_0: \mu \leq 0$ vs $H_1: \mu >0$ OR $H_0: \mu =0 $ vs $H_1: \mu >0$, as it seems that R does not distinguish between these. – Goose Jan 02 '19 at 16:47
  • Are you saying that is is not possible to test $H_0: \mu=0$ vs $H_1:\mu >0$? – Goose Jan 02 '19 at 16:52
  • Again, you cannot test H0: μ = 0 vs H1: μ > 0, because H0: μ = 0 implies the alternatives H1: μ > 0 and H1: μ < 0. In order to test for H1: μ > 0, the null hypothesis must be H0: μ ≤ 0. – Scholar Jan 02 '19 at 16:52
  • Then, why does the test $H_0: \mu=0$ vs $H_1:\mu >0$ appear in many textbooks? – Goose Jan 02 '19 at 16:53
  • @Goose, yes, because of the way hypothesis testing works, you can only reject the null hypothesis. This means you accept a set of alternatives defined by the null (i.e. the set of all possible hypotheses which are not contained in the set of null hypotheses). – Scholar Jan 02 '19 at 16:57
  • @Goose I don't know in which context this appears, but H0: μ = 0 does not (only) imply H1: μ > 0. EDIT: From the looks of it, the formulation of the test on page 3 in the link you quoted is likely wrong. – Scholar Jan 02 '19 at 16:58
  • If you are correct, it would imply that the book In All Likelihood (by Larry Wasserman!!!) is wrong. – Goose Jan 02 '19 at 17:02
  • Seems like it. I would assume it is a typo though. Hopefully we get some more answers here but I'm fairly confident in what I've stated. – Scholar Jan 02 '19 at 17:05
  • Well, there are dozens of references using this kind of hypotheses ([example](http://www.stats.ox.ac.uk/~filippi/Teaching/psychology_humanscience_2015/lecture8.pdf)) from reputable sources, so I am still hesitant to follow your logic. – Goose Jan 02 '19 at 17:20
  • That's very weird, because it goes against the fundamentals of hypothesis testing. If a hypothesis is not contained in the set of null hypotheses, it's an alternative hypothesis, and vice versa. Everything else doesn't make any sense. – Scholar Jan 02 '19 at 17:27
  • The event $\mu = 0$ is contained in the event $\mu \leq 0$, so, given the way hypothesis testing works (Neyman-Pearson lemma), the two hypothesis tests described by the OP are equivalent. – mlofton Jan 02 '19 at 17:27
  • @mlofton No, they are not equivalent. Let H0(μ = 0) be the set of hypotheses defined by H0: μ = 0, and define H0(μ ≤ 0) analogously. The null hypotheses are not identical since **H0(μ = 0) is a strict subset of H0(μ ≤ 0)** – Scholar Jan 02 '19 at 17:31
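The point made in the first comment under the question — that the composite null $H_0: \mu \leq 0$ and the point null $H_0: \mu = 0$ lead to the same test in practice — can be checked numerically: over $\mu \leq 0$, the rejection probability of the one-sided t-test is largest at the boundary $\mu = 0$, where it equals $\alpha$. A Python sketch with scipy (sample size, $\alpha$, and $\sigma = 1$ are illustrative assumptions), using the noncentral t distribution for the power function:

```python
import numpy as np
from scipy import stats

n, alpha = 20, 0.05          # illustrative choices
df = n - 1
t_crit = stats.t.ppf(1 - alpha, df)  # one-sided critical value

def reject_prob(mu, sigma=1.0):
    """P(T > t_crit) when the true mean is mu, via the noncentral t."""
    nc = mu * np.sqrt(n) / sigma  # noncentrality parameter
    return stats.nct.sf(t_crit, df, nc)

# Over the composite null mu <= 0, the rejection probability increases
# toward the boundary mu = 0, where it attains its supremum alpha.
probs = [reject_prob(mu) for mu in (-1.0, -0.5, -0.1, 0.0)]
```

This is why the size of the test under $H_0: \mu \leq 0$ is set by the single point $\mu = 0$, and why `t.test(..., alternative = "greater")` in R need not distinguish the two formulations of the null.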