Work around issues in load_from_checkpoint() using trainer.predict() #13466
It doesn't load the `current_epoch`, as explained in the issue, but it does load the hyperparameters if they are saved with `self.save_hyperparameters()`. Because the model doesn't store the current epoch itself, it reads `current_epoch` from the Trainer, and since you are using the same trainer instance here, that state persists.
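For reference, a minimal sketch of what that looks like; the class, layer, and argument names here are illustrative, not from the thread:

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self, hidden_dim: int = 64, lr: float = 1e-3):
        super().__init__()
        # Persists the __init__ arguments into checkpoints (under
        # "hyper_parameters"), so they can be restored later.
        self.save_hyperparameters()
        self.layer = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x):
        return self.layer(x)

    def predict_step(self, batch, batch_idx):
        (x,) = batch  # a TensorDataset yields one-element batches
        return self(x)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
```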
Hey all,

I recently noticed that `.load_from_checkpoint()` has some unintuitive features, for example not setting the correct `model.current_epoch` (#12819) and requiring users to manually save the hyperparameters. I also noticed that if I just use the `self.trainer.predict` API, then `self.trainer.model` is automatically loaded with the correct weights, `current_epoch` is also correctly the epoch of the checkpoint I saved (in contrast to when using `.load_from_checkpoint()`), plus the original, correct hyperparameters. I am not sure, though, whether this also correctly loads optimizer states and everything else.

I am wondering if this is a valid workaround or if some parts are not loaded this way. I guess there will be some downside to this?
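For illustration, a minimal sketch of the workaround described above, assuming the `LitModel` sketched earlier; the checkpoint path is a placeholder:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

# Fresh module instance; the real weights come from the checkpoint below.
model = LitModel(hidden_dim=64, lr=1e-3)
predict_loader = DataLoader(TensorDataset(torch.randn(8, 64)), batch_size=4)

trainer = pl.Trainer(accelerator="auto", devices=1)

# Passing ckpt_path tells the Trainer to restore the checkpoint before predicting.
trainer.predict(model, dataloaders=predict_loader, ckpt_path="path/to/epoch=4-step=500.ckpt")

# Per the observation above, trainer.model now holds the restored weights and
# hyperparameters, and trainer.current_epoch matches the checkpointed epoch.
restored_model = trainer.model
```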
Environment: