
Let $X_1,\ldots,X_n$ be iid $U(0,\theta)$. Find the UMP test of $H_0: \theta = \theta_0$ versus $H_1: \theta=\theta_1$, for $\theta_1 < \theta_0$. Obtain the power of the test.

My attempt:

We know that $X_{(n)}$ is sufficient for $\theta$ and its density is $$f(x;\theta)=\frac{nx^{n-1}}{\theta^n}I_{(0, \theta)}(x)$$
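
As a quick sanity check of this density (my addition, not part of the original attempt): the implied CDF of the maximum is $P(X_{(n)} \le x) = (x/\theta)^n$, which is easy to compare against simulation. The values of $\theta$, $n$, and the evaluation point below are arbitrary illustrative choices.

```python
import numpy as np

# Compare the empirical CDF of the sample maximum with (x/theta)^n.
# theta, n, and the evaluation point x are arbitrary illustrative choices.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 100_000
maxima = rng.uniform(0, theta, size=(reps, n)).max(axis=1)

x = 1.5
print(np.mean(maxima <= x))   # empirical CDF at x
print((x / theta) ** n)       # theoretical value: 0.75**5 ≈ 0.237
```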

So, by the Neyman-Pearson lemma we must have a critical region of the form

$$\left\{ x : \frac{I_{(0,\theta_0)}(x)}{I_{(0,\theta_1)}(x)}\,\frac{\theta_1^n}{\theta_0^n} \leq c \right\}$$

for some $0<c<1$.

But I can't write it in a better form. What should I do now?

Thanks in advance!

Giiovanna

3 Answers


The likelihood-ratio (LR) test is not terribly useful in this situation. Your test can be simplified from your specified critical region by looking at possible regions in which the maximum value can fall. From the ordering in your critical region, it is clear that the p-value function for your test is:

$$p(\boldsymbol{x}) = \begin{cases} \text{undefined} & \text{for } \theta_0 < x_{(n)}, \\ 1 & \text{for } \theta_1 < x_{(n)} \leqslant \theta_0, \\ (\theta_1 / \theta_0)^n & \text{for } 0 \leqslant x_{(n)} \leqslant \theta_1. \end{cases}$$

(In the case where $\theta_0 < x_{(n)}$ both hypotheses are falsified by the data, and your LR statistic is undefined, leading to an undefined p-value.)

We can see that, for any significance level $\alpha < (\theta_1 / \theta_0)^n$, the likelihood-ratio test accepts the null hypothesis under all possible observed outcomes (and is trivially UMP). For any significance level $\alpha > (\theta_1 / \theta_0)^n$, the test rejects the null if and only if $x_{(n)} \leqslant \theta_1$ (and it is again trivially UMP).

The problem with the LR test in this situation is that the LR is either zero or one, and does not have any gradations inside the range $0 \leqslant x_{(n)} \leqslant \theta_1$. This leads to a test with a binary p-value.


A better test to apply here (which does not satisfy the conditions of the Neyman-Pearson lemma, but is also UMP) is to impose an additional evidentiary ordering within the range $0 \leqslant x_{(n)} \leqslant \theta_1$, so that smaller values of $x_{(n)}$ are considered greater evidence for the alternative hypothesis. If we add this additional ordering, we obtain the smoother p-value function:

$$p(\boldsymbol{x}) = \begin{cases} \text{undefined} & \text{for } \theta_0 < x_{(n)}, \\ 1 & \text{for } \theta_1 < x_{(n)} \leqslant \theta_0, \\ (x_{(n)} / \theta_0)^n & \text{for } 0 \leqslant x_{(n)} \leqslant \theta_1. \end{cases}$$

This latter test has the benefit of avoiding a binary p-value, while maintaining the UMP condition (again trivially). Intuitively, it specifies that a lower observed maximum value is more conducive to a lower upper bound in the sampling distribution.
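
For concreteness, here is a minimal Python sketch of the two p-value functions above (the function names and the use of `None` for the undefined case are my own choices, not from the answer):

```python
def p_value_lr(x_max, theta0, theta1, n):
    """Binary p-value of the plain likelihood-ratio test."""
    if x_max > theta0:
        return None                    # both hypotheses falsified: undefined
    if x_max > theta1:
        return 1.0
    return (theta1 / theta0) ** n      # constant on [0, theta1]

def p_value_ordered(x_max, theta0, theta1, n):
    """p-value with the extra evidentiary ordering on [0, theta1]."""
    if x_max > theta0:
        return None
    if x_max > theta1:
        return 1.0
    return (x_max / theta0) ** n       # decreases as x_max decreases

# Example with n = 5, theta0 = 1, theta1 = 0.6, observed maximum 0.3:
print(p_value_lr(0.3, 1.0, 0.6, 5))       # 0.6**5 ≈ 0.0778, same for any x_max <= 0.6
print(p_value_ordered(0.3, 1.0, 0.6, 5))  # 0.3**5 ≈ 0.0024, smaller for smaller x_max
```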

Ben
  • 91,027
  • 3
  • 150
  • 376

The UMP test is not unique here.

Since the pdf $f$ has monotone likelihood ratio (MLR) in $X_{(n)}$, by the Karlin-Rubin theorem a UMP size $\alpha$ test for testing $H_0:\theta=\theta_0$ against $H_1:\theta<\theta_0$ is

$$\phi_0(x_1,\ldots,x_n)=\begin{cases}1&,\text{ if }x_{(n)}<\theta_0\alpha^{1/n} \\ 0&,\text{ otherwise }\end{cases}$$

(The cutoff is chosen so that $E_{\theta_0}\phi_0 = P_{\theta_0}\left(X_{(n)}<\theta_0\alpha^{1/n}\right) = \left(\alpha^{1/n}\right)^n = \alpha$.)

Now whenever $\theta_1<\theta_0$, we have $$\frac{f_{\theta_1}(x_1,\ldots,x_n)}{f_{\theta_0}(x_1,\ldots,x_n)}=\begin{cases}\left(\frac{\theta_0}{\theta_1}\right)^n &,\text{ if }0<x_{(n)}\le \theta_1 \\ 0&,\text{ if }\theta_1<x_{(n)}\le \theta_0\end{cases}$$

So by the NP lemma, a most powerful level $\alpha$ test of $H_0$ against $H_1:\theta=\theta_1\,(<\theta_0)$ is of the form

$$\phi^*=\begin{cases}0 &,\text{ if }\theta_1<x_{(n)}\le \theta_0 \\\text{any value in }[0,1]&,\text{ otherwise }\end{cases}$$

such that $E_{\theta_0}\phi^*=\alpha$.

This yields a non-randomized MP test of $H_0$ versus $H_1:\theta=\theta_1$, namely

$$ \phi_1(x_1,\ldots,x_n)=\begin{cases}0&,\text{ if }\theta_1<x_{(n)}\le \theta_0 \\ 0 &,\text{ if }\theta_0\alpha^{1/n}<x_{(n)}\le \theta_1 \\ 1 &,\text{ otherwise }\end{cases} $$

Since this test does not depend on $\theta_1$, it is UMP for $H_0$ versus $H_1:\theta<\theta_0$, and it simplifies to

$$\phi_1(x_1,\ldots,x_n)=\begin{cases}0&,\text{ if }\theta_0\alpha^{1/n}<x_{(n)}\le \theta_0 \\ 1&,\text{ otherwise }\end{cases}$$
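
Since the question also asks for the power: evaluating $P_{\theta_1}\left(X_{(n)}\le\theta_0\alpha^{1/n}\right)$ under the $U(0,\theta_1)$ model gives $\min\{1,\,\alpha(\theta_0/\theta_1)^n\}$. The following small simulation (my addition, with arbitrary parameter values) checks both the size and this power formula; the $x_{(n)}>\theta_0$ branch has probability zero under both hypotheses and is ignored.

```python
import numpy as np

# Size and power of the UMP test "reject iff x_(n) <= theta0 * alpha**(1/n)".
rng = np.random.default_rng(1)
theta0, theta1, n, alpha, reps = 1.0, 0.8, 10, 0.05, 200_000
cutoff = theta0 * alpha ** (1 / n)

def rejection_rate(theta):
    maxima = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
    return np.mean(maxima <= cutoff)

print(rejection_rate(theta0), alpha)                                 # size ≈ alpha
print(rejection_rate(theta1), min(1.0, alpha * (theta0/theta1)**n))  # power ≈ 0.466
```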

Also see Most powerful test of simple vs. simple in $\mathrm{Unif}[0, \theta]$.

StubbornAtom

I think you might have found the answer to this already, but this is what I am thinking.

Define $$\frac{\theta_1^n}{\theta_0^n} = T^n,$$ then reject the null hypothesis if $T^n \leq c$.

By taking the logarithm of both sides, you can simplify this to: reject the null hypothesis if $$\log (T) \leq k.$$

student_R123
  • The value $T$ is fixed by the hypotheses, and does not depend on the data. So by this reasoning, your test does not depend on the data at all! – Ben May 12 '18 at 01:59
  • In the uniform distribution, the likelihood function does not depend on the data. Here is a similar kind of attempt: https://math.stackexchange.com/questions/1736322/uniformly-most-powerful-test-for-a-uniform-sample – student_R123 May 14 '18 at 16:57
  • The uniform in this case depends on the data, through its support. In the link you have provided, the test also depends on the data. – Ben May 14 '18 at 23:48