
This might seem like a basic question to some, but I am utterly confused by the fact that the given pdfs are not Gaussian or any other distribution commonly seen in examples.

I have two hypotheses:

$H_0: y(t)=n(t)$

and

$H_1: y(t)=s(t)+n(t).$

The samples of the noise $n(t)$ have the pdf $p(n)=1-|n|$ for $|n|<1$ (and $p(n)=0$ otherwise), and the samples of the signal $s(t)$ all have the constant value $1$.

What would the likelihood functions be?

Robert
  • The phrasing "s(t) has a pdf of 1" does not make sense. Usually, s(t) would be a parameter that would be estimated (or a function of one or more parameters). Also the pdf for n(t) is a triangular distribution, with parameters (-1,0,1). – probabilityislogic Oct 12 '14 at 03:31
  • Corrected the phrasing of the question, hopefully it makes more sense now. – Robert Oct 12 '14 at 03:51
  • There's discussion of the behaviour of the likelihood for data from a triangular distribution [here](http://stats.stackexchange.com/a/64103/805) of which your specific null and alternative are special cases. – Glen_b Oct 12 '14 at 11:37
  • But if your problem is really as stated the log-likelihood ratio will go to either positive or negative infinity if even a single data point is below 0 or above 1. – Glen_b Oct 12 '14 at 11:42

1 Answer


Since the signal is the constant $s(t)=1$, the density of $Y$ under $H_1$ is simply the noise density shifted one unit to the right, $f_1(y) = f_0(y-1)$. That is, $Y$ is a random variable whose density $f_i(y)$ when $H_i$ is the true hypothesis is given by $$\begin{align} f_0(y) &= \begin{cases}1-|y|, & -1 < y < 1,\\0,&\text{otherwise,}\end{cases}\\ f_1(y) &= \begin{cases}y, & 0 < y < 1,\\ 2-y, & 1 \leq y < 2,\\0,&\text{otherwise,}\end{cases} \end{align}$$ making the likelihood ratio $$\Lambda(y) = \frac{f_1(y)}{f_0(y)} = \begin{cases}0, & -1 < y \leq 0,\\ \frac{y}{1-y}, & 0 < y < 1,\\ \infty, & 1 \leq y < 2,\\ \text{undefined}, & \text{otherwise.}\end{cases} $$

More to the point, when $H_0$ is the true hypothesis, all the observations $y_i$ necessarily lie in the interval $(-1,1)$, and if at least one of them is negative, the decision is that $H_0$ is indeed the true hypothesis (with no possibility of a false alarm, that is, a Type I error, and no need to think about $p$-values or similar things dear to the heart of the hypothesis-tester). Similarly, when $H_1$ is the true hypothesis, all the observations $y_i$ necessarily lie in the interval $(0,2)$, and if at least one of them exceeds $1$, the decision is that $H_1$ is indeed the true hypothesis (with no possibility of a false dismissal, that is, a Type II error).

It is only when all the observations $y_i$ lie in the interval $(0,1)$ that we need to consider the likelihood ratio or the log-likelihood ratio, and only then is there any possibility of a Type I or Type II error. The other cases are instances of what some people call singular detection: there is no possibility that the decision is incorrect.

Dilip Sarwate
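
For readers who want to try this numerically, below is a minimal Python sketch of the decision rule described in the answer. It is illustrative only: the function name `decide`, the zero log-likelihood-ratio threshold (corresponding to equal priors under a minimum-error-probability rule), and the Monte Carlo setup are assumptions of this sketch, not part of the original posts.

```python
import numpy as np

rng = np.random.default_rng(0)

def decide(y, threshold=0.0):
    """Return the index i of the declared hypothesis H_i."""
    y = np.asarray(y, dtype=float)
    if np.any(y < 0):      # impossible under H1: declare H0 with certainty
        return 0
    if np.any(y > 1):      # impossible under H0: declare H1 with certainty
        return 1
    # All samples fall in (0, 1): compare the log-likelihood ratio
    #   sum_i [ log y_i - log(1 - y_i) ]
    # to the threshold.
    llr = np.sum(np.log(y) - np.log1p(-y))
    return int(llr > threshold)

# Monte Carlo estimate of both error probabilities with N = 3 samples per trial.
N, trials = 3, 50_000
noise0 = rng.triangular(-1, 0, 1, size=(trials, N))   # noise-only data (H0 true)
noise1 = rng.triangular(-1, 0, 1, size=(trials, N))   # noise added to s = 1 (H1 true)
p_type1 = np.mean([decide(y) == 1 for y in noise0])        # false alarm
p_type2 = np.mean([decide(1.0 + y) == 0 for y in noise1])  # missed detection
print(f"Type I error  (false alarm)      ~ {p_type1:.3f}")
print(f"Type II error (missed detection) ~ {p_type2:.3f}")
```

Because the shift equals the full width of the noise support, many trials are settled with certainty (some $y_i < 0$ or some $y_i > 1$), which is exactly the singular-detection behaviour noted in the answer and in Glen_b's comment; errors can occur only when every sample lands in $(0,1)$.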