If you have trained a few deep learning models, you have probably run into scenarios where the loss suddenly becomes 'nan' and you are left clueless about what to do.
In my experience, most of the time it comes down to one of the following two reasons:
- Invalid inputs: null, NaN, blank, or infinite values in numerical fields
- Input data not normalized
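Both causes are easy to check before training. The sketch below (using a small hypothetical pandas DataFrame as the feature matrix; column names are illustrative) shows one way to detect invalid values and then standardize each column:

```python
import numpy as np
import pandas as pd

# Hypothetical feature matrix containing typical problem values.
df = pd.DataFrame({
    "age": [25, 32, np.nan, 41],
    "income": [50_000.0, 72_000.0, 61_000.0, np.inf],
})

# 1) Check for invalid inputs: NaN and +/-inf per column.
print(df.isna().sum())                 # NaN count per column
print(np.isinf(df.to_numpy()).sum())   # total count of infinite values

# Clean them up before training: treat inf as missing, then impute
# missing values with the column median (one common choice).
df = df.replace([np.inf, -np.inf], np.nan)
df = df.fillna(df.median(numeric_only=True))

# 2) Normalize: zero mean, unit variance per column, so no feature
# dominates the gradients and blows up the loss.
normalized = (df - df.mean()) / df.std()
```

If the NaN/inf counts are nonzero or the raw columns sit on wildly different scales, fixing that alone often makes the 'nan' loss disappear.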
Please refer to the links below for other possible reasons for 'nan' loss:
https://stackoverflow.com/questions/40050397/deep-learning-nan-loss-reasons
https://datascience.stackexchange.com/questions/68331/keras-sequential-model-returns-loss-nan