
Say $X \sim \mathrm{Unif}[0, \theta]$. Denote the observations as $x_i$ $(i=1, \cdots, n)$.

Show that any test $\phi$ that satisfies the following two conditions is a most powerful (MP) test of level $\alpha$ for $H_0 : \theta = \theta_0$ vs. $H_1 : \theta = \theta_1$ $(\theta_1 > \theta_0)$.

  • $\phi^{*}(x_1, \cdots, x_n) = 1$ if $\max\{x_1, \cdots, x_n\} > \theta_0$

  • $\mathbb{E}_{\theta_0} \phi^{*}(X) = \alpha$


My attempt

By Neyman-Pearson lemma, any test that satisfies

  • $\phi^{*}(x_1, \cdots, x_n) = \begin{cases} 1 & (\mathcal{L}(\theta_1 | x_1, \cdots, x_n)/\mathcal{L}(\theta_0 | x_1, \cdots, x_n) > k) \\ \gamma & (\mathcal{L}(\theta_1 | x_1, \cdots, x_n)/\mathcal{L}(\theta_0 | x_1, \cdots, x_n) = k) \\ 0 & (\mathcal{L}(\theta_1 | x_1, \cdots, x_n)/\mathcal{L}(\theta_0 | x_1, \cdots, x_n) < k) \end{cases} $

(for some $\gamma \in [0,1]$)

  • $\mathbb{E}_{\theta_0} \phi^{*}(X) = \alpha$

is an MP test for $H_0$ vs. $H_1$.

Since

$$ \begin{aligned} \mathcal{L}(\theta_1 | x_1, \cdots, x_n)/\mathcal{L}(\theta_0 | x_1, \cdots, x_n) &= (\theta_0 / \theta_1)^n I(\max x_i \le \theta_1)/I(\max x_i \le \theta_0) \\ &= \begin{cases} (\theta_0/\theta_1)^n & (\max x_i \le \theta_0) \\ \infty & (\max x_i > \theta_0) \end{cases} \end{aligned} $$

we have

$$ \phi^{*}(x_1, \cdots, x_n) = \begin{cases}1 & (\max x_i > \theta_0) \\ \alpha & (\max x_i \le \theta_0) \end{cases} $$

is an MP test for $H_0$ vs. $H_1$.
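As a sanity check (not part of the original argument), a quick Monte Carlo simulation of this randomized test confirms its level and power; the parameter values below are arbitrary choices of mine for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, theta1, n, alpha = 1.0, 1.5, 5, 0.05  # arbitrary illustrative values
trials = 200_000

def rejection_rate(theta):
    """Estimate E_theta[phi*] where phi* rejects if max x_i > theta0,
    and otherwise rejects with probability alpha (randomization)."""
    x = rng.uniform(0, theta, size=(trials, n))
    exceeds = x.max(axis=1) > theta0
    randomize = rng.random(trials) < alpha  # reject w.p. alpha when max <= theta0
    return np.mean(exceeds | randomize)

level = rejection_rate(theta0)  # should be close to alpha
power = rejection_rate(theta1)  # should match the closed-form power below
exact_power = 1 - (theta0 / theta1) ** n + alpha * (theta0 / theta1) ** n
print(level, power, exact_power)
```

Under $\theta_0$ the event $\max x_i > \theta_0$ has probability zero, so the rejection rate reduces to the randomization probability $\alpha$, as required.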

But this does not mean ANY test that satisfies

  • $\phi^{*}(x_1, \cdots, x_n) = 1$ if $\max\{x_1, \cdots, x_n\} > \theta_0$

  • $\mathbb{E}_{\theta_0} \phi^{*}(X) = \alpha$

is an MP test, but rather just provides one example of an MP test.

Could anyone help me out?

moreblue
  • You might want to check the inequality signs when you say $\begin{cases} (\theta_0/\theta_1)^n & (\max x_i > \theta_0) \\ \infty & (\max x_i \le \theta_0) \end{cases}$ as I suspect you intended these the other way round. It does not affect the rest of your argument – Henry Nov 30 '18 at 01:44

1 Answer


The logic is:

  • Any test which satisfies the two conditions has the same significance level $\alpha$ and power $1-\left(\frac{\theta_0}{\theta_1}\right)^n +\alpha\left(\frac{\theta_0}{\theta_1}\right)^n$ as the particular test you found;

  • Given that your test is most powerful of all tests with significance level $\alpha$, all the other tests satisfying the two conditions are also most powerful with significance level $\alpha$.
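One step worth making explicit: under $\theta_1$, conditional on $\max x_i \le \theta_0$, the observations are i.i.d. $\mathrm{Unif}[0,\theta_0]$, which is exactly their unconditional law under $\theta_0$. Hence for any test satisfying the two conditions,

```latex
\begin{aligned}
\mathbb{E}_{\theta_1}\phi^{*}
&= P_{\theta_1}(\max x_i > \theta_0)
 + \mathbb{E}_{\theta_1}\!\left[\phi^{*} \,\middle|\, \max x_i \le \theta_0\right]
   P_{\theta_1}(\max x_i \le \theta_0) \\
&= 1 - \left(\frac{\theta_0}{\theta_1}\right)^{n}
 + \mathbb{E}_{\theta_0}\!\left[\phi^{*}\right]
   \left(\frac{\theta_0}{\theta_1}\right)^{n}
 = 1 - \left(\frac{\theta_0}{\theta_1}\right)^{n}
 + \alpha\left(\frac{\theta_0}{\theta_1}\right)^{n},
\end{aligned}
```

where the first term uses condition 1 ($\phi^{*}=1$ whenever $\max x_i > \theta_0$) and the last equality uses condition 2.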

If you want to check, consider the significance level and power of these two deterministic tests

  • $\phi^{*}(x_1, \cdots, x_n) = \begin{cases} 1 & \text{ when }\max x_i > \theta_0 \sqrt[n]{1-\alpha} \\ 0 & \text{ when }\max x_i \le \theta_0 \sqrt[n]{1-\alpha} \end{cases}$
  • $\phi^{*}(x_1, \cdots, x_n) = \begin{cases} 1 & \text{ when }\max x_i > \theta_0 \\ 0 & \text{ when }\min x_i \gt \theta_0 \left(1- \sqrt[n]{1-\alpha}\right) \text{ and } \max x_i \le \theta_0 \\ 1 & \text{ when }\min x_i \le \theta_0 \left(1- \sqrt[n]{1-\alpha}\right) \end{cases}$
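If helpful, the suggested check can be carried out in closed form; this short script (the parameter values are my own illustrative choices) computes the exact level and power of both deterministic tests and compares them with the claimed common power.

```python
theta0, theta1, n, alpha = 1.0, 1.5, 5, 0.05  # arbitrary illustrative values

c1 = theta0 * (1 - alpha) ** (1 / n)        # acceptance cutoff of the first test
c2 = theta0 * (1 - (1 - alpha) ** (1 / n))  # lower acceptance cutoff of the second

def accept_prob(theta, lo, hi):
    """P(all n i.i.d. Unif[0, theta] observations fall in (lo, hi])."""
    width = max(min(hi, theta) - min(lo, theta), 0.0)
    return (width / theta) ** n

# The first test accepts iff all x_i <= c1;
# the second accepts iff all x_i lie in (c2, theta0].
level1 = 1 - accept_prob(theta0, 0.0, c1)
level2 = 1 - accept_prob(theta0, c2, theta0)
power1 = 1 - accept_prob(theta1, 0.0, c1)
power2 = 1 - accept_prob(theta1, c2, theta0)

claimed = 1 - (theta0 / theta1) ** n + alpha * (theta0 / theta1) ** n
print(level1, level2, power1, power2, claimed)
```

Both tests come out with level exactly $\alpha$ and power $1-(1-\alpha)(\theta_0/\theta_1)^n$, which equals the expression above.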
Henry
  • Could you please explain the meaning of the $\phi^*$ notation? My book doesn't use that notation, and I'm finding it extraordinarily confusing. – Adrian Keister Aug 18 '21 at 18:58
  • 1
    @AdrianKeister - I copied it from the question. I doubt that the $*$ means anything important. $\phi^*$ here is just a function applied to the observations giving $1$ when the test suggests rejecting $H_0$ and $0$ when the test does not suggest that – Henry Aug 18 '21 at 23:43
  • Thanks, Henry, that's helpful. Could you please elaborate on where the $\sqrt[n]{\alpha}$ came from? I'm not following how that arises in your $\phi^*$. Thanks again! – Adrian Keister Aug 19 '21 at 17:59
  • @AdrianKeister The aim is to get the probability of all $n$ observations being in the desired region to be $1-\alpha$ (so raising the probability each observation is in a suitable interval to the power $n$, so in reverse taking the $n$th root) - but one of my expressions needed a slight correction – Henry Aug 19 '21 at 20:09
  • Thanks for your comment. Unfortunately, I'm one of those people who find it very difficult to follow outlines of proofs instead of step-by-step proofs worked out in complete detail (a good bit of the time). Do you have a book or web reference I can examine that solves this problem step-by-step? Thanks again for your time! – Adrian Keister Aug 19 '21 at 20:17
  • @AdrianKeister I am afraid I am one of those people who rarely reads books so have nothing to suggest – Henry Aug 19 '21 at 20:28
  • Another question: in the $\theta_1<\theta_0$ case, don't we need $\max x_i\ge\max(\theta_0(1-\alpha)^{1/n},\theta_1)$ instead of the weaker $\max x_i\ge\max(\theta_0(1-\alpha)^{1/n},\theta_0)?$ – Adrian Keister Aug 20 '21 at 15:03
  • @AdrianKeister The question says $\theta_1 \gt \theta_0$. You would get different answers if you reversed this, and clearly it would be peculiar to ever reject $H_0$ if $\theta_1 \lt \max(x_i) \le \theta_0$. That would then affect the rejection region if you wanted to keep $\alpha$ – Henry Aug 20 '21 at 15:10
  • Hmm. I'm not sure I was clear enough in my question. I realize that $\theta_1<\theta_0$ is a different problem. I'm seeing an asymmetry in the solutions that is puzzling me. Here's the basic question: for this problem, the $\theta_1>\theta_0$ case, why don't we need $\max(x_i)>\theta_1$ in order to reject $H_0?$ – Adrian Keister Aug 20 '21 at 15:35
  • NVM, I think I figured it out. It comes from the $x_i$'s all being "to the left", so to speak. The likelihood ratio is undefined in the no-man's-land between $\theta_0$ and $\theta_1.$ – Adrian Keister Aug 20 '21 at 15:46