```python
# this fit call loads model weights and trainer state
# the trainer continues seamlessly from where you left off
# without having to do anything else.
trainer.fit(model)
```
The trainer restores:
- global_step
- current_epoch
- All optimizers
- All lr_schedulers
- Model weights
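
For comparison, here is what bundling those same pieces into a checkpoint looks like in plain PyTorch. This is a minimal sketch of the idea, not Lightning's actual saving code; the function name and the `'state_dict'` key for the model weights are assumptions:

```python
import torch

def save_trainer_checkpoint(path, model, optimizers, lr_schedulers,
                            global_step, current_epoch):
    # bundle everything needed to resume training into one file,
    # mirroring the restore logic shown further below
    checkpoint = {
        'global_step': global_step,
        'epoch': current_epoch,
        'optimizer_states': [opt.state_dict() for opt in optimizers],
        'lr_schedulers': [sched.state_dict() for sched in lr_schedulers],
        'state_dict': model.state_dict(),  # assumed key for the model weights
    }
    torch.save(checkpoint, path)
```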
You can even change the logic of your model, as long as the weights and "architecture" of the system stay the same. If you add a layer, for instance, it might not work.
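
To see why, note that PyTorch's `load_state_dict` is strict by default: a checkpoint saved before a layer was added fails with a `RuntimeError` about missing keys. A small illustrative sketch (the toy models below are examples, not from the library):

```python
import torch.nn as nn

old_model = nn.Sequential(nn.Linear(10, 10))
new_model = nn.Sequential(nn.Linear(10, 10), nn.Linear(10, 10))  # one extra layer

weights = old_model.state_dict()
try:
    new_model.load_state_dict(weights)  # strict=True by default
except RuntimeError as err:
    print(f'restore failed: {err}')  # missing keys for the added layer

# a partial restore is possible by opting out of strict key checking
new_model.load_state_dict(weights, strict=False)
```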
At a rough level, here's [what happens inside Trainer](https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/root_module/model_saving.py#L63):
```python
self.global_step = checkpoint['global_step']
self.current_epoch = checkpoint['epoch']

# restore the optimizers
optimizer_states = checkpoint['optimizer_states']
for optimizer, opt_state in zip(self.optimizers, optimizer_states):
    optimizer.load_state_dict(opt_state)

# restore the lr schedulers
lr_schedulers = checkpoint['lr_schedulers']
for scheduler, lrs_state in zip(self.lr_schedulers, lr_schedulers):
    scheduler.load_state_dict(lrs_state)
```
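
The model weights are restored in the same pass. In plain PyTorch terms it amounts to a single call like the one below; the `'state_dict'` checkpoint key is an assumption here and may differ by version:

```python
# hedged sketch: restoring the saved weights into the model being fit;
# 'state_dict' is an assumed key, not confirmed Lightning internals
model.load_state_dict(checkpoint['state_dict'])
```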