training loss is 'nan' #75
Comments
When I increased the chunk size from 200 ms to 600 ms, training started correctly. Below 600 ms it gives `nan`.
It looks like a gradient issue. Sometimes this can be solved just by adding gradient clipping.
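As a minimal sketch of the gradient-clipping suggestion above, assuming a PyTorch training loop (the model, data, and optimizer here are placeholders, not the project's actual code):

```python
import torch
import torch.nn as nn

# Placeholder model and data, purely for illustration.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(4, 10)
y = torch.randn(4, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()

# Rescale all gradients in place so their global L2 norm is at most 1.0.
# This caps the size of a single update step and can stop exploding
# gradients from turning the loss into nan.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```

The `max_norm` value is a hyperparameter; values around 0.5 to 5.0 are common starting points.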
I encountered the same problem. How did you solve it in the end?
I encountered this too. See #102.
During training I am getting only `nan` loss, and even after 40 epochs there is no update.
After debugging I found that the training batches are generated correctly (I printed their tensors), but when I print `pout` it is `nan`.
Output: for the first file it shows a valid tensor, but after that it is `nan`. Very weird behaviour. My dataset contains 10 audio files of 10 s length for each speaker.
Please help!
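To track down where the `nan` first appears, a small diagnostic helper can be printed at each suspect point in the forward pass. This is a hypothetical sketch assuming PyTorch; `pout` below stands in for the model output tensor mentioned above:

```python
import torch

def nan_report(name, tensor):
    """Return a short diagnostic string describing NaNs in `tensor`."""
    n_nan = torch.isnan(tensor).sum().item()
    if n_nan:
        return f"{name}: {n_nan}/{tensor.numel()} values are NaN"
    return f"{name}: ok"

# Illustrative tensors: one clean, one containing a NaN.
clean = torch.ones(3)
pout = torch.tensor([1.0, float("nan"), 2.0])
print(nan_report("input batch", clean))
print(nan_report("pout", pout))
```

PyTorch also ships `torch.autograd.set_detect_anomaly(True)`, which raises an error at the exact backward operation that produced the first `nan`, at the cost of slower training.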