Given a logistic regression model:
$y \in \{0, 1\}$
$ P(y=1|x;\theta) = h_{\theta}(x) = \frac{1}{1+\exp(-\theta^T x)}$
Suppose we have the value $\theta^*$ that maximises the conditional likelihood $P(y|X; \theta)$. It seems to me that, given a new test example $x$, I should compute the predicted label as:
$ y^*|x; \theta^* = \textbf{1} \{\frac{1}{1+\exp(-\theta^{*T} x)} > 0.5 \} $
However, a well-known online ML course (page 3) states that the prediction rule is:
$ y^*|x; \theta^* = \textbf{1} \{\theta^{*T}x > 0 \} $
These two rules don't seem to agree in boundary cases, e.g. the trivial case $x \in \mathbb{R}$, $x = 0$, where $h_{\theta^*}(x) = 0.5$ exactly. Which rule is correct?
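For concreteness, here is how I am comparing the two rules numerically. This is just a sketch: `theta_star` and the test inputs are made-up illustrative values, not taken from any fitted model.

```python
import numpy as np

def sigmoid(z):
    """Logistic function h_theta(x) = 1 / (1 + exp(-z)), with z = theta^T x."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fitted parameter and a few 1-D test inputs (illustrative only).
theta_star = np.array([2.0])
test_inputs = [np.array([-1.0]), np.array([0.0]), np.array([1.0])]

for x in test_inputs:
    z = theta_star @ x
    rule_prob = int(sigmoid(z) > 0.5)  # my rule: threshold h_theta(x) at 0.5
    rule_sign = int(z > 0)             # the course's rule: threshold theta^T x at 0
    print(f"x = {x[0]:+.1f}: prob rule -> {rule_prob}, sign rule -> {rule_sign}")
```

In particular, the case I am worried about is $x = 0$, where $\theta^{*T} x = 0$ and $h_{\theta^*}(x) = 0.5$, so both strict inequalities sit exactly on their thresholds.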