I'm trying to implement a Wasserstein GAN following this blog post: https://myurasov.github.io/2017/09/24/wasserstein-gan-keras.html
It uses the following Wasserstein loss for the discriminator:
    from keras import backend as K

    def d_loss(y_true, y_pred):
        # Wasserstein loss: mean of the elementwise product of labels and predictions
        return K.mean(y_true * y_pred)
My understanding is that y_pred converges to two values: 1 if the image is fake, and -1 if the image is real. My question is: how does this loss encourage the discriminator to converge? If y_true is 1 and y_pred is 1, then the loss is 1. It seems the network would instead be encouraged to just output zeros everywhere, which brings the loss down to 0.
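To make my confusion concrete, here's a quick numeric sketch of the same computation using numpy in place of the Keras backend (the labels 1 = fake, -1 = real follow my understanding above, not necessarily the blog post's convention):

    import numpy as np

    def d_loss(y_true, y_pred):
        # same computation as K.mean(y_true * y_pred), just in numpy
        return np.mean(y_true * y_pred)

    # labels: 1 for fake, -1 for real
    y_true = np.array([1.0, -1.0])

    # if the critic outputs the "correct" labels, the loss is 1.0:
    print(d_loss(y_true, np.array([1.0, -1.0])))  # -> 1.0

    # but outputting all zeros gives a smaller loss:
    print(d_loss(y_true, np.array([0.0, 0.0])))   # -> 0.0

So minimizing this loss seems to reward the all-zeros output over the "correct" one, which is exactly what I don't understand.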