A theorem stating that the likelihood ratio test is the most powerful test of a point null hypothesis against a point alternative hypothesis. DO NOT use this tag for the Neyman-Pearson approach to hypothesis testing in general; this tag is for the lemma only.
Questions tagged [neyman-pearson-lemma]
85 questions
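For reference, a sketch of the lemma's standard simple-vs-simple statement (notation here is generic; randomization on the boundary $f_1(x) = k\,f_0(x)$ is omitted): for testing $H_0: X \sim f_0$ against $H_1: X \sim f_1$ at level $\alpha$,
$$
\text{reject } H_0 \text{ when } \frac{f_1(x)}{f_0(x)} > k,
\qquad \text{with } k \text{ chosen so that } P_{f_0}\!\left(\frac{f_1(X)}{f_0(X)} > k\right) = \alpha,
$$
and any test of this form is most powerful: no other test with size at most $\alpha$ has greater power under $f_1$.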
11
votes
1 answer
What are the "desirable" statistical properties of the likelihood ratio test?
I am reading an article whose method is fully based on the likelihood ratio test. The author says that the LR test against one-sided alternatives is UMP. He proceeds by claiming that
"...even when it [the LR test] can not be shown to be uniformly…

Sergey Zykov
- 341
- 1
- 12
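The result usually invoked for the one-sided claim is the Karlin–Rubin theorem (sketched here under the assumption of a one-parameter family with monotone likelihood ratio in a statistic $T(X)$; randomization at the cutoff is omitted): the test
$$
\text{reject } H_0 \text{ when } T(x) > c, \qquad P_{\theta_0}(T(X) > c) = \alpha,
$$
is UMP of level $\alpha$ for $H_0: \theta \le \theta_0$ against $H_1: \theta > \theta_0$. Outside the MLR setting no such uniform guarantee is available in general.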
10
votes
2 answers
Why is the Neyman-Pearson lemma a lemma and not a theorem?
This is more of a history question than a technical question.
Why is the "Neyman-Pearson lemma" a Lemma and not a Theorem?
link to wiki: https://en.wikipedia.org/wiki/Neyman%E2%80%93Pearson_lemma
NB: The question is not about what is a lemma and…

Tauto
- 103
- 5
9
votes
1 answer
Reproduce figure of "Computer Age Statistical Inference" from Efron and Hastie
The summarized version of my question
(26th December 2018)
I am trying to reproduce Figure 2.2 from Computer Age Statistical Inference by Efron and Hastie, but for some reason that I'm not able to understand, the numbers are not corresponding with…

Francisco Fonseca
- 131
- 6
6
votes
2 answers
Support of likelihood ratio test statistic
Say I'm testing $H_0: Y \sim \text{Exp}(1)$ against $H_1: Y \sim \text{U}(0, 1)$. I believe this gives me the following likelihood ratio test:
$$
t^*(y) = \frac{p_1(y)}{p_0(y)}
= \frac{1}{e ^ {-y}}
= e ^ {y}
$$
The problem is defining the support…

Waldir Leoncio
- 2,137
- 6
- 28
- 42
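For this pair of hypotheses the support issue can be made explicit (a sketch, with $p_1(y) = \mathbf{1}\{0 < y < 1\}$ and $p_0(y) = e^{-y}\,\mathbf{1}\{y > 0\}$): the ratio equals $e^y$ only where both densities are positive,
$$
t^*(y) = \frac{p_1(y)}{p_0(y)} =
\begin{cases}
e^{y} & 0 < y < 1, \\
0 & y \ge 1,
\end{cases}
$$
so under $H_0$ the statistic takes values in $\{0\} \cup (1, e)$, while under $H_1$ it lies in $(1, e)$ with probability one.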
5
votes
0 answers
Neyman-Pearson lemma: critical region and hypothesis testing
Let $X_1,X_2,\dots,X_n$ be i.i.d. random variables with common p.d.f.
$$
f(x)=\frac{x^5 e^{-x/\theta}}{5!\,\theta^6}
$$
where $\theta > 0$. Show that the Neyman-Pearson lemma produces a test of $H_0: \theta=\theta_0$ against $H_1: \theta=\theta_1…

user123965
- 693
- 4
- 14
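A sketch of the usual reduction for this density (a Gamma with shape $6$ and scale $\theta$), assuming the truncated text asks for a test based on $\sum_i X_i$:
$$
\frac{\prod_{i=1}^n f(x_i \mid \theta_1)}{\prod_{i=1}^n f(x_i \mid \theta_0)}
= \left(\frac{\theta_0}{\theta_1}\right)^{6n}
\exp\!\left[\left(\frac{1}{\theta_0} - \frac{1}{\theta_1}\right)\sum_{i=1}^n x_i\right],
$$
which is monotone in $\sum_i x_i$ (increasing when $\theta_1 > \theta_0$), so the Neyman–Pearson rejection region reduces to $\sum_i x_i > c$, or $< c$ when $\theta_1 < \theta_0$; under $H_0$, $2\sum_i X_i/\theta_0 \sim \chi^2_{12n}$, which pins down $c$.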
5
votes
2 answers
In plain English what is the difference between a most powerful test and a uniformly most powerful test?
I'm having trouble understanding the two concepts of a most powerful test and a uniformly most powerful test. I'm reading about these tests in the context of the Neyman-Pearson lemma, and it seems like they're virtually the same thing?

stthomas
- 71
- 1
- 3
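One compact way to separate the two notions (standard definitions, written with the power function $\beta_\phi(\theta) = E_\theta[\phi(X)]$):
$$
\phi \text{ is MP at level } \alpha \text{ for } H_1: \theta = \theta_1
\iff \beta_\phi(\theta_1) \ge \beta_{\phi'}(\theta_1) \text{ for every level-}\alpha \text{ test } \phi',
$$
and $\phi$ is UMP for a composite $H_1: \theta \in \Theta_1$ when that inequality holds simultaneously at every $\theta_1 \in \Theta_1$. MP concerns a single fixed alternative; UMP demands one test that wins at all of them at once.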
5
votes
1 answer
Ways to find a UMP test
I'm studying for my final exams, and the exam will essentially be about hypothesis testing, so I will try to summarize my doubts here.
To find a UMP test, the approaches are:
1) Use the Neyman–Pearson lemma, where the test is of the type
…
user72621
5
votes
0 answers
Asymmetry of the Kullback-Leibler distance in hypothesis testing
My question is related to the asymmetry of the Kullback-Leibler distance. I'm using the discrete definition of the Kullback-Leibler distance, so we have:
$$
KL(p,q) = \sum_{s \in S} p(s) \log\left( \frac{p(s)}{q(s)}\right)
$$
where $p$ and $q$…

Omega
- 51
- 3
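A small numerical illustration of the asymmetry (natural logarithms), using two hypothetical distributions on a three-point support, $p = (0.7, 0.2, 0.1)$ and $q = (0.1, 0.3, 0.6)$:
$$
KL(p,q) = 0.7\log\tfrac{0.7}{0.1} + 0.2\log\tfrac{0.2}{0.3} + 0.1\log\tfrac{0.1}{0.6} \approx 1.10,
\qquad
KL(q,p) = 0.1\log\tfrac{0.1}{0.7} + 0.3\log\tfrac{0.3}{0.2} + 0.6\log\tfrac{0.6}{0.1} \approx 1.00,
$$
so reversing the arguments generally gives a different number, which is why the order of the null and the alternative matters when KL appears in testing results.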
4
votes
1 answer
Most powerful test of simple vs. simple in $\mathrm{Unif}[0, \theta]$
Say $X \sim \mathrm{Unif}[0, \theta]$. Denote the observations as $x_i$ $(i=1, \cdots, n)$.
Show that any test $\phi$ that satisfies the following two conditions is a most powerful test of level $\alpha$ for $H_0 : \theta = \theta_0$ vs. $H_1 :…

moreblue
- 1,089
- 1
- 6
- 19
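A sketch of why this family behaves differently from the regular Neyman–Pearson setting (assuming, since the conditions are truncated, that they involve the maximum $x_{(n)}$, and taking $\theta_1 > \theta_0$ for concreteness): with joint density $f(x \mid \theta) = \theta^{-n}\,\mathbf{1}\{0 \le x_{(n)} \le \theta\}$,
$$
\frac{f(x \mid \theta_1)}{f(x \mid \theta_0)} =
\begin{cases}
(\theta_0/\theta_1)^n & x_{(n)} \le \theta_0, \\
+\infty & \theta_0 < x_{(n)} \le \theta_1,
\end{cases}
$$
so the ratio is constant on $\{x_{(n)} \le \theta_0\}$; any test that rejects whenever $x_{(n)} > \theta_0$ and has size exactly $\alpha$ under $\theta_0$ attains the maximal power, which is why a whole family of tests can be most powerful here.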
4
votes
1 answer
Neyman-Pearson Lemma and hypothesis-testing
Consider testing $H_0:\theta=\theta_0$ vs $H_1:\theta=\theta_1$, where
the pdf or pmf corresponding to $\theta_i$ is $f(x|\theta_i)$, $i=0,1$
using a test with rejection region R that satisfies
$x\in R$ if $f(x|\theta_1)>kf(x|\theta_0)$ and…
user72621
3
votes
0 answers
Consequences of the Neyman-Pearson lemma?
Suppose $X_{1},\dots,X_{n}$ are independent, identically distributed Bernoulli random quantities with parameter $k$. Consider the hypothesis test $H_{0}: k = k_{0}$ vs $H_{1}: k = k_{1}$, where $k_{1} > k_{0}$. Suppose, using the Neyman-Pearson…

Sam
- 31
- 2
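A sketch of the standard reduction for this Bernoulli problem (assuming the truncated text goes on to derive the rejection region): writing $S = \sum_{i=1}^n X_i$,
$$
\frac{\prod_i k_1^{x_i}(1-k_1)^{1-x_i}}{\prod_i k_0^{x_i}(1-k_0)^{1-x_i}}
= \left(\frac{1-k_1}{1-k_0}\right)^{n}\left(\frac{k_1(1-k_0)}{k_0(1-k_1)}\right)^{S},
$$
which is increasing in $S$ because $k_1 > k_0$, so the Neyman–Pearson test rejects for large $S$, with $S \sim \mathrm{Binomial}(n, k_0)$ under $H_0$ and randomization at the cutoff when an exact size $\alpha$ is required.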
3
votes
1 answer
How can I apply the Neyman-Pearson Lemma for $f(x|\theta)=\frac{1}{2\theta}\exp[-|x|/\theta]$?
Let $X_1,\cdots,X_n$ be a random sample from:
$$f(x|\theta)=\frac{1}{2\theta}\exp[-|x|/\theta]
\quad \quad \quad x \in \mathbb{R},$$
where $\theta>0$ is unknown. How can I find an MP size $\alpha$ test for $H_0:\theta=\theta_0$ versus…

Jen Snow
- 1,595
- 2
- 18
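A sketch of the usual route for this density: the likelihood ratio depends on the sample only through $\sum_i |x_i|$,
$$
\frac{\prod_{i=1}^n \frac{1}{2\theta_1}e^{-|x_i|/\theta_1}}{\prod_{i=1}^n \frac{1}{2\theta_0}e^{-|x_i|/\theta_0}}
= \left(\frac{\theta_0}{\theta_1}\right)^{n}
\exp\!\left[\left(\frac{1}{\theta_0}-\frac{1}{\theta_1}\right)\sum_{i=1}^n |x_i|\right],
$$
which is increasing in $\sum_i |x_i|$ when $\theta_1 > \theta_0$ and decreasing when $\theta_1 < \theta_0$ (the excerpt cuts off before the alternative's direction is given), so the MP test rejects for large (or small) $\sum_i |x_i|$; under $H_0$ each $|X_i|$ is exponential with mean $\theta_0$, hence $2\sum_i |X_i|/\theta_0 \sim \chi^2_{2n}$, which fixes the size-$\alpha$ cutoff.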
3
votes
1 answer
UMP test of size $\alpha$ for $H_0: \theta=0$ versus $H_1: \theta >0$ with $X_1,X_2,\dots,X_n \stackrel{iid}{\sim} \mathcal{U}(\theta,\theta+1)$
(Note - This is also on MSE but I thought I might have better luck here). I was posed the following question:
Let $X_1,X_2,\dots,X_n \stackrel{iid}{\sim} \mathcal{U}(\theta,\theta+1)$. Consider testing $H_0: \theta=0$ versus $H_1: \theta >0$ via…

user365239
- 378
- 1
- 8
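One concrete starting point for this non-regular family (only a sketch, since the question is truncated): the joint likelihood is an indicator,
$$
L(\theta \mid x) = \prod_{i=1}^n \mathbf{1}\{\theta \le x_i \le \theta + 1\}
= \mathbf{1}\{x_{(n)} - 1 \le \theta \le x_{(1)}\},
$$
so for any fixed $\theta_1 > 0$ the ratio of the $\theta_1$-likelihood to the $\theta = 0$ likelihood takes only the values $0$, $1$, and "$1/0$", and the argument runs through the order statistics $x_{(1)}$ and $x_{(n)}$ rather than through a smooth likelihood ratio.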
3
votes
0 answers
What is the general methodology for constructing a UMP test for a simple hypothesis versus a composite one?
I think I understand the Neyman-Pearson lemma, but I'm really struggling to understand the reasoning by which it's used as a building block for tests of composite hypotheses.
Take this worked example, say. At the end, they say that "the"…

Jack M
- 369
- 2
- 8
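The textbook argument pattern for a simple null versus a composite alternative (stated generically, not as a claim about the linked example): fix an arbitrary $\theta_1$ in the alternative and apply the lemma to the simple pair,
$$
\text{reject } H_0 \text{ when } \frac{f(x \mid \theta_1)}{f(x \mid \theta_0)} > k_\alpha,
\qquad P_{\theta_0}\!\left(\frac{f(X \mid \theta_1)}{f(X \mid \theta_0)} > k_\alpha\right) = \alpha;
$$
if the resulting rejection region turns out to be the same set for every $\theta_1$ in the alternative, then the test is MP against each $\theta_1$ separately while not depending on $\theta_1$, and is therefore UMP against the whole composite alternative.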
3
votes
1 answer
Eliminating a nuisance parameter in likelihood ratio test
I am having an argument with a co-author about how to eliminate a nuisance parameter in a simple likelihood ratio test, and I am hoping the community can help us settle it.
Our data $\mathbf{x}$ can be described by the likelihood functions…

M.B.M.
- 1,059
- 8
- 18
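For context only (the excerpt cuts off before the specific likelihoods): the two generic ways to eliminate a nuisance parameter $\eta$ from a likelihood-ratio statistic are to profile it out or to integrate it out,
$$
\lambda_{\mathrm{prof}}(x) = \frac{\sup_{\eta} L(\theta_0, \eta \mid x)}{\sup_{\theta, \eta} L(\theta, \eta \mid x)},
\qquad
\lambda_{\mathrm{int}}(x) = \frac{\int L(\theta_0, \eta \mid x)\,\pi(\eta)\,d\eta}{\int L(\theta_1, \eta \mid x)\,\pi(\eta)\,d\eta},
$$
where the profile form is what the classical likelihood-ratio test uses and the integrated form requires a weight function $\pi$ over $\eta$ (notation here is generic, not the authors' setup for $\mathbf{x}$).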