unstable training #3

Open
Feynman1999 opened this issue Sep 2, 2023 · 0 comments
I am training a super-resolution network using your method, but I have noticed that even after loading a pre-trained model, the validation PSNR suddenly drops significantly after a few hundred or a few thousand iterations (for example, from 38 to 32). The cause seems to be that the loss suddenly spikes on a single iteration. Have you encountered this situation before? What could be the possible reasons for it? Thank you!
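(Not specific to this repository's training code, but a common way to make such runs more robust to single-iteration loss spikes is to clip gradients and skip the optimizer step on outlier batches. Below is a minimal, hypothetical PyTorch sketch of that idea; the model, optimizer, loss, thresholds, and data are toy stand-ins, not this project's actual method.)

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real super-resolution model and data pipeline.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 3, 3, padding=1)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()

ema_loss = None       # running average of recent losses
spike_factor = 5.0    # skip the update if the loss exceeds 5x the running average
max_grad_norm = 1.0   # gradient clipping threshold

for step in range(1000):
    # Synthetic low-res / high-res pair; replace with the real dataloader.
    lr_img = torch.rand(4, 3, 32, 32)
    hr_img = torch.rand(4, 3, 32, 32)

    optimizer.zero_grad()
    loss = criterion(model(lr_img), hr_img)

    # Guard against single-iteration loss spikes: skip the update instead of
    # letting one bad batch (or a numerical blow-up) wreck the weights.
    if ema_loss is not None and loss.item() > spike_factor * ema_loss:
        print(f"step {step}: loss spike {loss.item():.4f} (ema {ema_loss:.4f}), skipping update")
        continue

    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()

    # Exponential moving average of the loss, used as the spike baseline.
    ema_loss = loss.item() if ema_loss is None else 0.99 * ema_loss + 0.01 * loss.item()
```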
