
gradient penalty of sigmoid instead of logits. #4

Open
d-michele opened this issue Nov 7, 2019 · 3 comments

@d-michele

```python
gradients = tf.gradients(self.discriminator(interpolates, is_reuse=True), [interpolates])[0]
```

Hi, could you explain why you calculate the gradient on the sigmoid output instead of the logits? Thanks!

@ChengBinJin
Owner

@d-michele
According to the theory of WGAN and WGAN-GP, the Wasserstein distance is estimated from the raw output of the discriminator (critic), and no sigmoid is applied at its final layer. Therefore, when computing the gradient penalty from the discriminator, there is no sigmoid either.
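A minimal NumPy sketch of this point (the linear critic `D(x) = w·x` here is a hypothetical stand-in for the discriminator's final linear layer, not code from this repo): the WGAN-GP penalty is `(||∇_x D(x)|| - 1)^2` on the raw output, and wrapping `D` in a sigmoid rescales the gradient by `σ'(D(x))`, which distorts the penalty.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear critic D(x) = w . x (no sigmoid at the end).
w = np.array([0.6, 0.8])      # ||w|| = 1.0
x = np.array([2.0, -1.0])     # an "interpolated" sample

# Gradient of D w.r.t. x is just w, so the penalty is (||w|| - 1)^2.
gp_logit = (np.linalg.norm(w) - 1.0) ** 2

# Penalizing sigmoid(D(x)) instead: the chain rule multiplies the
# gradient by sigmoid'(D(x)), shrinking its norm and biasing the penalty.
logit = w @ x
grad_sigmoid = sigmoid(logit) * (1.0 - sigmoid(logit)) * w
gp_sigmoid = (np.linalg.norm(grad_sigmoid) - 1.0) ** 2

print(round(gp_logit, 6))       # 0.0 for this unit-norm w
print(gp_sigmoid > gp_logit)    # True
```

For this 1-Lipschitz toy critic the penalty on the logits is zero, while the sigmoid-wrapped version is penalized even though the critic itself is fine.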

@opetliak

@ChengBinJin Hi, why do you return two outputs (sigmoid and linear) in basicDiscriminator?

@ChengBinJin
Owner

@lenpetlyak The linear value is used for calculating the Wasserstein distance; the sigmoid value is just for debugging.
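A hedged sketch of that two-output pattern (the function and names below are hypothetical illustrations, not the repo's actual `basicDiscriminator`): the final layer stays linear for the Wasserstein loss, and the sigmoid is only a second, monitoring-friendly view of the same logits.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def basic_discriminator(x, w, b):
    """Hypothetical two-output discriminator head."""
    logits = x @ w + b        # linear output: used for the Wasserstein distance
    probs = sigmoid(logits)   # sigmoid output: for debugging/monitoring only
    return probs, logits

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3))   # a batch of 4 feature vectors
w = rng.standard_normal((3, 1))
probs, logits = basic_discriminator(x, w, b=0.0)
print(probs.shape, logits.shape)  # (4, 1) (4, 1)
```

Returning both values costs nothing extra, since the sigmoid is computed from logits the network already produces.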
