Why my generator seems to cheat my discriminator by generating a same picture? #181
Comments
@tianjilong123 Hello! I have the same problem. Tell me, did you manage to solve it?
I didn't continue using this code. But my guess is that the patch size may not match the input images (too big or too small), which caused the output to focus on a local feature.
@tianjilong123 Interestingly, I now have the same suspicion about the patch size. I'm trying to apply the texture of real fields to a field from the simulator. For the first 50 epochs, the generator tries to preserve the structure of the image from the simulator, but then something strange happens: the generator simply recreates the original field image, ignoring the structure.
Unfortunately, the authors stopped answering questions and issues a long time ago, so we have to piece answers together from fragments.
@Bananaspirit Hi, did you change the number of layers (nce_layers) or the number of patches (num_patches in networks/netF)? I found that if we reduce the number of patches or the number of layers too much, the generated images tend to look like that (a square in the center, outputs that all look the same...).
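One rough way to reason about the "too few patches" point above: at each step the patch sampler only sees a fraction of the feature-map locations, and shrinking num_patches shrinks that fraction. This is a hypothetical back-of-the-envelope sketch, not code from the repo; the 256-patch default and the 64x64 feature-map size are assumptions for illustration.

```python
def patch_coverage(num_patches: int, feat_h: int, feat_w: int) -> float:
    """Fraction of feature-map locations sampled per step.

    Assumes patches are sampled without replacement from the
    feat_h x feat_w spatial grid (an illustrative simplification).
    """
    total = feat_h * feat_w
    return min(num_patches, total) / total

# 256 patches on a 64x64 feature map cover only ~6% of locations,
# so cutting num_patches further leaves most of the image unsupervised
# by the contrastive loss at any given step.
print(round(patch_coverage(256, 64, 64), 4))  # -> 0.0625
print(round(patch_coverage(64, 64, 64), 4))   # -> 0.0156
```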
I want to ask why my generator seems to generate the same output picture in some epochs, while the loss function stays at a relatively stable value. Is it mode collapse? Can anyone tell me how to handle it? Thanks for your replies!
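One quick way to confirm the "same picture" symptom numerically, rather than by eye, is to check the per-pixel variation across a batch of generated images: if the generator has collapsed to one output, the standard deviation across the batch is near zero. This is a generic diagnostic sketch (function name and tolerance are my own choices, not from this repo):

```python
import numpy as np

def outputs_collapsed(batch, tol: float = 1e-3) -> bool:
    """Heuristic mode-collapse check.

    batch: array-like of shape (N, C, H, W) of generated images.
    Returns True if every image in the batch is (nearly) identical,
    i.e. the mean per-pixel std across the batch is below tol.
    """
    batch = np.asarray(batch, dtype=np.float64)
    return float(batch.std(axis=0).mean()) < tol

# Identical "images": collapsed.
same = np.ones((8, 3, 4, 4))
# Varied "images": not collapsed.
rng = np.random.default_rng(0)
varied = rng.normal(size=(8, 3, 4, 4))
print(outputs_collapsed(same), outputs_collapsed(varied))  # -> True False
```

Running this on generator outputs from different input images each epoch would distinguish true mode collapse (identical outputs for different inputs) from the generator merely converging slowly.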