In hypothesis tests, the null and alternative hypotheses are typically defined as $H_0 : \mu = \mu_0$ versus $H_1: \mu \neq \mu_0$ for a two-tailed test, or $H_0 : \mu \geq \mu_0$ versus $H_1: \mu < \mu_0$ for a one-tailed test, for some population parameter $\mu$.

My question is whether there is a difference in power when testing the reversed hypotheses instead: $H_0 : \mu \neq \mu_0$ versus $H_1: \mu = \mu_0$ for a two-tailed test, or $H_0 : \mu < \mu_0$ versus $H_1: \mu \geq \mu_0$ for a one-tailed test. Essentially, I fail to see why there is always an equality at a point value within the null. I have read that it is because the conditions stated under the null are of more interest to us, or that the distribution induced under the null is easier to compute. Does anyone know the main reason an equality sign is usually included in the null?
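To make this concrete, here is a minimal simulation sketch (Python with NumPy/SciPy; the sample size, $\sigma$, significance level, and grid of true means are all illustrative assumptions, with known $\sigma$ and normal data). It checks that, for the one-tailed test of $H_0 : \mu \geq \mu_0$, the rejection probability over the whole null region is largest exactly at the boundary point $\mu = \mu_0$, the one point at which the null distribution is fully specified:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0 = 0.0      # hypothesized value
sigma = 1.0    # assumed known population standard deviation
n = 30         # illustrative sample size
alpha = 0.05
n_sims = 20_000

# One-tailed test of H0: mu >= mu0 vs H1: mu < mu0.
# Reject when the z-statistic falls below the alpha-quantile of N(0, 1);
# this critical value is computable only because the point mu = mu0
# pins down the null distribution exactly.
z_crit = stats.norm.ppf(alpha)

def rejection_rate(mu_true):
    """Monte Carlo estimate of P(reject H0) when the true mean is mu_true."""
    samples = rng.normal(mu_true, sigma, size=(n_sims, n))
    z = (samples.mean(axis=1) - mu0) / (sigma / np.sqrt(n))
    return np.mean(z < z_crit)

# Sweep the true mean across the composite null region H0: mu >= mu0.
for mu_true in [mu0, mu0 + 0.1, mu0 + 0.3, mu0 + 0.5]:
    print(f"mu = {mu_true:+.1f}: rejection rate ~= {rejection_rate(mu_true):.3f}")
```

With these illustrative numbers, the printed rate is about $0.05$ at $\mu = \mu_0$ and strictly smaller for every $\mu > \mu_0$. In other words, the size of the test is attained at the equality point, which seems to be why that point must belong to the null: it is the single, fully specified distribution from which the critical value is computed.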
- $H_0 : \mu \neq \mu_0$ is not concrete enough (the inequality can be of any magnitude). How will you derive the test distribution for that vagueness? – ttnphns Mar 19 '16 at 08:56
- Your question pretty much lists both the logical reason and the practical reason (though "possible" should probably replace "easier"); whether the logical is more important than the practical depends on your viewpoint, I guess... Is it necessary for one of those reasons -- either sufficient on its own -- to have primacy? – Glen_b Mar 19 '16 at 09:05