
I'm not exactly sure how to calculate the Bayes risk $L(r^*)$ of the Bayes classifier $r^*$ for $Y\in\{ 0,1 \}$.

For this scenario, assume:

$X\in\mathbb{X}=[0,1],Y\in\{ 0,1 \}$

$\pi_y=P(Y=y)=1/2$ for $y\in\{0,1\}$

Conditional distributions $[X|Y=y]$ are characterised by the densities:

$f(x|Y=0)=2-2x$ and $f(x|Y=1)=2x$.
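As a quick numerical sanity check (my own sketch, not part of the original question), the two stated functions are valid densities on $[0,1]$, and with $\pi_0=\pi_1=1/2$ the marginal density of $X$ works out to be uniform:

```python
import numpy as np

# Uniform grid on X = [0, 1]; with an evenly spaced grid, the sample mean
# of a density over the grid approximates its integral over [0, 1].
x = np.linspace(0.0, 1.0, 100_001)

f0 = 2 - 2 * x   # stated density f(x | Y = 0)
f1 = 2 * x       # stated density f(x | Y = 1)

# Both class-conditionals integrate to ~1, so they are valid densities.
print(f0.mean(), f1.mean())          # both ~ 1.0

# With pi_0 = pi_1 = 1/2 the marginal is
# f(x) = (1/2)(2 - 2x) + (1/2)(2x) = 1, i.e. X is marginally Uniform[0, 1].
marginal = 0.5 * f0 + 0.5 * f1
print(marginal.min(), marginal.max())  # both ~ 1.0
```

The uniform marginal is what makes the posterior $\tau_1(x)=\pi_1 f(x|Y=1)/f(x)$ easy to write down here.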

I know the classification loss function for a generic classifier $r:\mathbb{X}\longrightarrow\{0,1\}$ is $\ell(x,y,r(x))=[[r(x)\neq y]]$, where $\ell:\mathbb{X}\times\{0,1\}\times\{0,1\}\longrightarrow\{0,1\}$.

I'm also aware that the associated risk is $L(r)=E([[r(X)\neq Y]])$, which is equivalent to $P(r(X)\neq Y)$, and that $L(r)\geq L(r^*)$.

In the binary classification case, $L(r^*)=E(\min\{\tau_1(X),1-\tau_1(X)\})=1/2-1/2\,E(|2\tau_1(X)-1|)$, where $\tau_1(x)=P(Y=1\mid X=x)$, but I'm not exactly sure how to evaluate this with the stated priors and PDFs.
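As a numerical sketch of where this should land (my own check, not from the thread): by Bayes' rule with equal priors, $\tau_1(x)=\pi_1 f(x|Y=1)/f(x)=x$ here, so the formula above becomes $E[\min(X,1-X)]=\int_0^1\min(x,1-x)\,dx=1/4$. A Monte Carlo simulation of the stated model agrees, noting that $f(x|Y=0)=2-2x$ and $f(x|Y=1)=2x$ are the Beta(1,2) and Beta(2,1) densities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sample (X, Y) from the stated model: Y ~ Bernoulli(1/2),
# X | Y=0 ~ Beta(1, 2) (density 2 - 2x), X | Y=1 ~ Beta(2, 1) (density 2x).
y = rng.integers(0, 2, size=n)
x = np.where(y == 1, rng.beta(2, 1, size=n), rng.beta(1, 2, size=n))

# Bayes classifier r*(x) = 1{tau_1(x) > 1/2}; since tau_1(x) = x here,
# this is simply r*(x) = 1{x > 1/2}.
r_star = (x > 0.5).astype(int)

# Empirical misclassification rate, estimating L(r*) = 1/4
print(np.mean(r_star != y))
```

The printed estimate should be close to $0.25$, consistent with the closed-form calculation.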

  • The question is unclear. What do you need to find? Do you need to show that the Bayes classifier $r^*(x)=\begin{cases}1 & \tau_1(x)>1/2\\0 & \text{otherwise}\end{cases}$ is optimal? – Spätzle Oct 27 '21 at 08:27
