$Y$ is a random variable whose density $f_i(y)$ when $H_i$ is the true hypothesis
is given by
$$\begin{align}
f_0(y) &= \begin{cases}1-|y|, & -1 < y < 1,\\0,&\text{otherwise,}\end{cases}\\
f_1(y) &= \begin{cases}y, & 0 < y < 1,\\
2-y, & 1 \leq y < 2,\\0,&\text{otherwise,}\end{cases}
\end{align}$$
making the likelihood ratio
$$\Lambda(y) = \frac{f_1(y)}{f_0(y)}
= \begin{cases}0, & -1 < y < 0,\\
\frac{y}{1-y}, & 0 < y < 1,\\
\infty, & 1 < y < 2,\\
\text{undefined}, & \text{otherwise.}\end{cases}
$$
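For concreteness, here is a minimal numerical sketch (Python/NumPy; the function names `f0`, `f1`, and `likelihood_ratio` are my own, not part of the problem statement) that evaluates the two densities and the likelihood ratio in each of the four regions:

```python
import numpy as np

def f0(y):
    """Density of Y under H0: triangular on (-1, 1), peaked at 0."""
    y = np.asarray(y, dtype=float)
    return np.where(np.abs(y) < 1, 1 - np.abs(y), 0.0)

def f1(y):
    """Density of Y under H1: triangular on (0, 2), peaked at 1."""
    y = np.asarray(y, dtype=float)
    return np.where(np.abs(y - 1) < 1, 1 - np.abs(y - 1), 0.0)

def likelihood_ratio(y):
    """Lambda(y) = f1(y)/f0(y): 0 on (-1,0), y/(1-y) on (0,1), inf on (1,2)."""
    num, den = f1(y), f0(y)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = num / den              # may produce inf or nan where den == 0
    return np.where(den > 0, ratio, np.where(num > 0, np.inf, np.nan))

print(likelihood_ratio([-0.5, 0.25, 0.75, 1.5]))   # [0.  0.333...  3.  inf]
```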
More to the point, when $H_0$ is the true hypothesis, all the observations $y_i$
necessarily lie in the interval $(-1,1)$, and if at least one of them is negative,
the decision is that $H_0$ is indeed the true hypothesis, with no possibility
of a false alarm (Type I error) and no need to think about $p$-values
or similar things dear to the heart of the hypothesis-tester.

Similarly, when $H_1$ is the true hypothesis, all the observations $y_i$
necessarily lie in the interval $(0,2)$, and if at least one of them exceeds $1$,
the decision is that $H_1$ is indeed the true hypothesis, with no possibility
of a false dismissal (Type II error) and, again, no need to think about $p$-values.

It is only when all the observations $y_i$ lie in the interval $(0,1)$ that we
need to consider the likelihood ratio or the log-likelihood ratio, and only then
is there any possibility of making a Type I or Type II error. In the other cases,
we have an instance of what some people call singular detection: there is no
possibility that the decision is incorrect.
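The whole decision procedure, including the two singular-detection cases, can be sketched as follows (again Python/NumPy; `decide`, the zero threshold, and the sampling trick are illustrative choices of mine, not prescribed by the problem):

```python
import numpy as np

rng = np.random.default_rng(0)

def decide(y, threshold=0.0):
    """Decision rule for i.i.d. observations y = (y_1, ..., y_n).

    A single negative observation rules out H1, and a single observation
    above 1 rules out H0 (the singular-detection cases).  Only when every
    y_i lies in (0, 1) do we resort to the log-likelihood ratio test;
    threshold = 0 corresponds to comparing the product of the Lambda(y_i)
    to 1, i.e. the maximum-likelihood decision.
    """
    y = np.asarray(y, dtype=float)
    if np.any(y < 0):
        return "H0"        # impossible under H1: no chance of a Type I error
    if np.any(y > 1):
        return "H1"        # impossible under H0: no chance of a Type II error
    # every y_i in (0, 1): sum of log Lambda(y_i) = log(y_i) - log(1 - y_i)
    llr = np.sum(np.log(y) - np.log(1 - y))
    return "H1" if llr > threshold else "H0"

# The difference (sum) of two independent Uniform(0,1) variables has exactly
# the triangular density f_0 (f_1), so sampling under each hypothesis is easy.
n = 5
y_under_h0 = rng.uniform(size=n) - rng.uniform(size=n)   # support (-1, 1)
y_under_h1 = rng.uniform(size=n) + rng.uniform(size=n)   # support (0, 2)
print(decide(y_under_h0), decide(y_under_h1))
```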