
I'm trying to implement a Wasserstein GAN according to this blog post: https://myurasov.github.io/2017/09/24/wasserstein-gan-keras.html

It has a Wasserstein loss of:

from keras import backend as K

def d_loss(y_true, y_pred):
    return K.mean(y_true * y_pred)

My understanding is that y_pred converges to two values: 1 if the image is fake, and -1 if the image is real. My question is: how does this encourage the discriminator to converge? If y_true is 1 and y_pred is 1, then the loss is 1. It seems, then, that the network is encouraged to output zeros just to bring the loss down to 0.
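
To make the question concrete, here is a minimal numeric sketch (mine, not from the blog post) of how the loss behaves under this labeling (+1 for fake, -1 for real). Note that the critic has no sigmoid, so y_pred is an unbounded score:

import numpy as np

def d_loss(y_true, y_pred):
    # NumPy stand-in for K.mean(y_true * y_pred)
    return np.mean(y_true * y_pred)

# Hypothetical critic scores; the outputs are unbounded, not pinned to +/-1
real_scores = np.array([2.0, 3.0])    # D(real), labeled -1
fake_scores = np.array([-1.0, -2.0])  # D(fake), labeled +1

y_true = np.array([-1.0, -1.0, 1.0, 1.0])
y_pred = np.concatenate([real_scores, fake_scores])

print(d_loss(y_true, y_pred))  # -2.0; grows more negative as the score gap widens

Under this convention the minimum is not 0: the critic can drive the loss arbitrarily far below zero by separating the scores of real and fake samples, which is part of what confuses me about the "output zeros" intuition.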

    In a Wasserstein GAN the discriminator isn't a discriminator anymore: the network learns to minimize the discrepancy between generated and real values. However, I don't understand your given loss either, even though it appears in implementations. In my opinion it should be an additional layer, (y_pred - y), plus a gradient penalty. Have you answered your question yet? – maniac Jun 15 '18 at 11:47
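
    For reference, the gradient penalty the comment mentions comes from WGAN-GP (Gulrajani et al., 2017). Below is a rough Keras sketch; the function name and the averaged_samples argument are my own illustrative choices, not something from the blog post.

from keras import backend as K

def gradient_penalty_loss(y_true, y_pred, averaged_samples, weight=10.0):
    # y_pred is the critic's output on samples interpolated between real
    # and generated batches; y_true is ignored (Keras requires it in the
    # loss signature). The penalty pushes the critic's gradient norm
    # toward 1, softly enforcing the 1-Lipschitz constraint.
    gradients = K.gradients(y_pred, averaged_samples)[0]
    # per-sample L2 norm of the gradient over all non-batch axes
    sq_sum = K.sum(K.square(gradients), axis=list(range(1, K.ndim(gradients))))
    grad_norm = K.sqrt(sq_sum + K.epsilon())
    return weight * K.mean(K.square(grad_norm - 1.0))

    One common pattern is to bind averaged_samples with functools.partial before passing the loss to model.compile, and to add this term to the critic's Wasserstein loss.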

0 Answers