Let's suppose that a neural network is used to map a set of training data onto the continuous interval between 0 and 1 by applying a sigmoid function at its output layer. Is it correct to optimize the model with a log-likelihood loss function such as:
\begin{equation} J(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log\left(h(\theta^T x_i)\right)+(1-y_i)\log\left(1-h(\theta^T x_i)\right)\right] \end{equation}
Or does the loss function have to be modified somehow to account for the continuous-valued targets?
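For concreteness, here is a minimal NumPy sketch of the loss as I am currently computing it (the function and variable names are just illustrative; the targets here are continuous values in [0, 1] rather than hard 0/1 labels):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood_loss(y, y_hat, eps=1e-12):
    # Clip predictions away from 0 and 1 to avoid log(0).
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    # Mean negative log-likelihood, matching J(theta) above.
    return -np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

# Continuous targets in [0, 1], not just {0, 1}.
y = np.array([0.2, 0.7, 0.95, 0.0])
logits = np.array([-1.5, 0.8, 3.0, -4.0])  # theta^T x_i for each example

print(log_likelihood_loss(y, sigmoid(logits)))
```

Nothing in this computation requires the targets to be binary, which is what prompts the question: is minimizing this quantity still a principled objective when the targets are continuous?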