24 changes: 23 additions & 1 deletion docs/source-pytorch/common/checkpointing_basic.rst
@@ -20,6 +20,13 @@ PyTorch Lightning checkpoints are fully usable in plain PyTorch.

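Because a checkpoint is just a dictionary saved with ``torch.save``, you can inspect and load it with plain PyTorch. A minimal sketch (the path is a placeholder, and ``LitModel`` stands in for your ``LightningModule``):

.. code-block:: python

    import torch

    # a Lightning checkpoint is an ordinary dict saved with torch.save
    checkpoint = torch.load("path/to/your/checkpoint.ckpt", map_location="cpu")
    print(checkpoint.keys())  # e.g. "state_dict", "epoch", "global_step", ...

    # the model weights live under the "state_dict" key
    model = LitModel()
    model.load_state_dict(checkpoint["state_dict"])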
----

.. important::

    The ``resume_from_checkpoint`` argument of the ``Trainer`` was deprecated in PyTorch Lightning v1.5 and removed in v2.0.
    To resume training from a checkpoint, pass the ``ckpt_path`` argument to the ``fit()`` method instead.
    Please update your code accordingly to avoid compatibility issues.
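A minimal migration sketch (the checkpoint path is a placeholder):

.. code-block:: python

    # before (deprecated, removed in v2.0):
    # trainer = Trainer(resume_from_checkpoint="path/to/your/checkpoint.ckpt")
    # trainer.fit(model)

    # after:
    trainer = Trainer()
    trainer.fit(model, ckpt_path="path/to/your/checkpoint.ckpt")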

************************
Contents of a checkpoint
************************
@@ -197,16 +204,31 @@ You can disable checkpointing by passing:
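In recent Lightning versions the relevant ``Trainer`` flag is ``enable_checkpointing``; a minimal sketch, assuming the current API:

.. code-block:: python

    trainer = Trainer(enable_checkpointing=False)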

----


*********************
Resume training state
*********************

If you don't want to load only the weights, but instead restore the full training state (epoch, global step, optimizer and LR scheduler states, etc.), do the following:

Correct usage:

.. code-block:: python

    model = LitModel()
    trainer = Trainer()

    # automatically restores model, epoch, step, LR schedulers, etc...
    trainer.fit(model, ckpt_path="path/to/your/checkpoint.ckpt")

.. warning::

    The ``resume_from_checkpoint`` argument was deprecated in PyTorch Lightning v1.5 and removed in v2.0.
    To resume training from a checkpoint, use the ``ckpt_path`` argument of the ``fit()`` method instead.

Incorrect (deprecated) usage:

.. code-block:: python

    # resume_from_checkpoint is no longer accepted by the Trainer;
    # this fails with a TypeError on PyTorch Lightning >= 2.0
    trainer = Trainer(resume_from_checkpoint="path/to/your/checkpoint.ckpt")
    trainer.fit(model)