Jump of loss when resume training from checkpoint #7097
Unanswered
BZandi asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 0 comments
I noticed that the loss jumps every time I resume training (see figure). I use `resume_from_checkpoint` to continue training, and the parameters seem to load correctly, including the optimizer state. Does anyone know where this behaviour comes from, or is it normal? Thanks in advance for any ideas.
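One common cause of such a jump is that some piece of optimizer state (e.g. momentum or Adam moment buffers) is not actually restored on resume, even when the weights are. The toy pure-Python sketch below (not Lightning code; the momentum optimizer and loss `f(w) = w²` are illustrative assumptions) shows how resuming with only the weight, but a reset momentum buffer, produces a different first post-resume loss than resuming with the full state:

```python
# Toy illustration: SGD with momentum minimizing f(w) = w**2.
# A "checkpoint" that restores only the weight (not the momentum
# buffer) changes the first update after resuming, so the loss
# curve is discontinuous; restoring both continues it smoothly.

def sgd_momentum_steps(w, v, steps, lr=0.1, mu=0.9):
    """Run `steps` updates on f(w) = w**2; return (w, v, losses)."""
    losses = []
    for _ in range(steps):
        grad = 2 * w          # df/dw
        v = mu * v + grad     # momentum buffer
        w = w - lr * v
        losses.append(w * w)
    return w, v, losses

# Train 50 steps, then "checkpoint" both weight and momentum.
w, v, first = sgd_momentum_steps(w=1.0, v=0.0, steps=50)

# Resume with the full optimizer state: the curve continues as if
# training had never stopped.
_, _, full = sgd_momentum_steps(w, v, steps=1)

# Resume with the momentum buffer reset to zero (state lost): the
# first post-resume step differs, i.e. the loss jumps.
_, _, partial = sgd_momentum_steps(w, 0.0, steps=1)

print(full[0], partial[0])  # the two post-resume losses differ
```

If the checkpoint really does contain the full optimizer (and LR scheduler) state, other things worth checking are whether the dataloader shuffling/sampler state restarts from scratch and whether the learning-rate schedule resumes at the right step.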