
Commit c83b9e2

Merge branch 'master' into HYPERPARAMS-SAVE-#8912
2 parents aff6b1b + d9dfb2e commit c83b9e2

File tree

84 files changed: +884 −490 lines changed


CHANGELOG.md

Lines changed: 20 additions & 5 deletions
@@ -100,6 +100,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
  * Marked several methods in `PredictionLoop` as protected: `on_predict_start`, `on_predict_epoch_end`, `on_predict_end`, `on_predict_model_eval` ([#9516](https://github.com/PyTorchLightning/pytorch-lightning/pull/9516))
  * Marked several methods in `EvaluationLoop` as protected: `get_max_batches`, `on_evaluation_model_eval`, `on_evaluation_model_train`, `on_evaluation_start`, `on_evaluation_epoch_start`, `on_evaluation_epoch_end`, `on_evaluation_end`, `reload_evaluation_dataloaders` ([#9516](https://github.com/PyTorchLightning/pytorch-lightning/pull/9516))
  * Marked several methods in `EvaluationEpochLoop` as protected: `on_evaluation_batch_start`, `evaluation_step`, `evaluation_step_end` ([#9516](https://github.com/PyTorchLightning/pytorch-lightning/pull/9516))
+ * Added `yielding_training_step` example ([#9983](https://github.com/PyTorchLightning/pytorch-lightning/pull/9983))


  - Added support for saving and loading state of multiple callbacks of the same type ([#7187](https://github.com/PyTorchLightning/pytorch-lightning/pull/7187))

@@ -205,6 +206,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
  * Added bfloat16 support for Lightning Trainer ([#9049](https://github.com/PyTorchLightning/pytorch-lightning/pull/9049))
  * Renamed `TPUHalfPrecisionPlugin` to `TPUBf16PrecisionPlugin` ([#10026](https://github.com/PyTorchLightning/pytorch-lightning/pull/10026))
  * Default to `precision=bf16` on CPU when `precision=16` is passed ([#10033](https://github.com/PyTorchLightning/pytorch-lightning/pull/10033))
+ * Add support for `torch.autocast` ([#10053](https://github.com/PyTorchLightning/pytorch-lightning/pull/10053))


  - Added `kfold` example for loop customization ([#9965](https://github.com/PyTorchLightning/pytorch-lightning/pull/9965))

@@ -213,10 +215,10 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
  - LightningLite:
  * Added `PrecisionPlugin.forward_context`, making it the default implementation for all `{train,val,test,predict}_step_context()` methods ([#9988](https://github.com/PyTorchLightning/pytorch-lightning/pull/9988))
  * Added `DDPSpawnPlugin.spawn()` for spawning new processes of a given function ([#10018](https://github.com/PyTorchLightning/pytorch-lightning/pull/10018), [#10022](https://github.com/PyTorchLightning/pytorch-lightning/pull/10022))
- * Added `TrainingTypePlugin.{_setup_model, _setup_optimizer}` methods ([#9994](https://github.com/PyTorchLightning/pytorch-lightning/pull/9994))
+ * Added `TrainingTypePlugin.{_setup_model, _setup_optimizer}` methods ([#9994](https://github.com/PyTorchLightning/pytorch-lightning/pull/9994), [#10064](https://github.com/PyTorchLightning/pytorch-lightning/pull/10064))
  * Implemented `DataParallelPlugin._setup_model` ([#10010](https://github.com/PyTorchLightning/pytorch-lightning/pull/10010))
- * Implemented `DeepSpeedPlugin._setup_models_and_optimizers` ([#10009](https://github.com/PyTorchLightning/pytorch-lightning/pull/10009))
- * Implemented `{DDPShardedPlugin,DDPShardedSpawnPlugin}._setup_models_and_optimizers` ([#10028](https://github.com/PyTorchLightning/pytorch-lightning/pull/10028))
+ * Implemented `DeepSpeedPlugin._setup_model_and_optimizers` ([#10009](https://github.com/PyTorchLightning/pytorch-lightning/pull/10009), [#10064](https://github.com/PyTorchLightning/pytorch-lightning/pull/10064))
+ * Implemented `{DDPShardedPlugin,DDPShardedSpawnPlugin}._setup_model_and_optimizers` ([#10028](https://github.com/PyTorchLightning/pytorch-lightning/pull/10028), [#10064](https://github.com/PyTorchLightning/pytorch-lightning/pull/10064))
  * Added optional `model` argument to the `optimizer_step` methods in accelerators and plugins ([#10023](https://github.com/PyTorchLightning/pytorch-lightning/pull/10023))


@@ -227,6 +229,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
  - Added `use_omegaconf` argument to `save_hparams_to_yaml` plugin ([#9170](https://github.com/PyTorchLightning/pytorch-lightning/pull/9170))


+ - Added `ckpt_path` argument for `trainer.fit()` ([#10061](https://github.com/PyTorchLightning/pytorch-lightning/pull/10061))
+
+
  ### Changed

  - Setting `Trainer(accelerator="ddp_cpu")` now does not spawn a subprocess if `num_processes` is kept `1` along with `num_nodes > 1` ([#9603](https://github.com/PyTorchLightning/pytorch-lightning/pull/9603)).

@@ -332,13 +337,14 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
  - `pytorch_lightning.utilities.grads.grad_norm` now raises an exception if parameter `norm_type <= 0` ([#9765](https://github.com/PyTorchLightning/pytorch-lightning/pull/9765))


-
  - Updated error message for interactive incompatible plugins ([#9896](https://github.com/PyTorchLightning/pytorch-lightning/pull/9896))


  - Updated several places in the loops and trainer to access `training_type_plugin` directly instead of `accelerator` ([#9901](https://github.com/PyTorchLightning/pytorch-lightning/pull/9901))


+ - Disable quantization aware training observers by default during validating/testing/predicting stages ([#8540](https://github.com/PyTorchLightning/pytorch-lightning/pull/8540))
+

  ### Deprecated

@@ -413,6 +419,16 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

  - Deprecated `GPUStatsMonitor` and `XLAStatsMonitor` in favor of `DeviceStatsMonitor` callback ([#9924](https://github.com/PyTorchLightning/pytorch-lightning/pull/9924))

+
+ - Deprecated access to the `AcceleratorConnector.is_slurm_managing_tasks` attribute and marked it as protected ([#10101](https://github.com/PyTorchLightning/pytorch-lightning/pull/10101))
+
+
+ - Deprecated access to the `AcceleratorConnector.configure_slurm_ddp` method and marked it as protected ([#10101](https://github.com/PyTorchLightning/pytorch-lightning/pull/10101))
+
+
+ - Deprecated passing `resume_from_checkpoint` to the `Trainer` constructor in favor of `trainer.fit(ckpt_path=)` ([#10061](https://github.com/PyTorchLightning/pytorch-lightning/pull/10061))
+
+
  ### Removed

  - Removed deprecated `metrics` ([#8586](https://github.com/PyTorchLightning/pytorch-lightning/pull/8586/))

@@ -615,7 +631,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
  - Fixed `LearningRateMonitor` logging with multiple param groups optimizer with no scheduler ([#10044](https://github.com/PyTorchLightning/pytorch-lightning/pull/10044))


-
  - Fixed undesired side effects being caused by `Trainer` patching dataloader methods on the `LightningModule` ([#9764](https://github.com/PyTorchLightning/pytorch-lightning/pull/9764))
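
Since this branch is about hyperparameter saving, the `use_omegaconf` entry above is worth a quick illustration. The following is a minimal sketch, assuming the argument simply toggles the OmegaConf-based serialisation path; the hparams dict and output path are made up:

    from pytorch_lightning.core.saving import save_hparams_to_yaml

    # Hypothetical hyperparameters; any plain dict (or OmegaConf container) works here.
    hparams = {"learning_rate": 1e-3, "batch_size": 32}

    # Per the changelog entry above, `use_omegaconf=False` is assumed to skip the
    # OmegaConf round-trip and write plain YAML instead.
    save_hparams_to_yaml("hparams.yaml", hparams, use_omegaconf=False)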

docs/source/advanced/advanced_gpu.rst

Lines changed: 1 addition & 1 deletion
@@ -622,7 +622,7 @@ After training using ZeRO Stage 3, you'll notice that your checkpoints are a dir

  .. warning::

-     This single file checkpoint does not include the optimizer/lr-scheduler states. This means we cannot restore training via the `resume_from_checkpoint` Trainer argument. Ensure to keep the sharded checkpoint directory if this is required.
+     This single file checkpoint does not include the optimizer/lr-scheduler states. This means we cannot restore training via the ``trainer.fit(ckpt_path=)`` call. Ensure to keep the sharded checkpoint directory if this is required.

  Custom DeepSpeed Config
  """""""""""""""""""""""

docs/source/common/trainer.rst

Lines changed: 4 additions & 0 deletions
@@ -1349,6 +1349,10 @@ By setting to False, you have to add your own distributed sampler:
  resume_from_checkpoint
  ^^^^^^^^^^^^^^^^^^^^^^

+ .. warning:: ``resume_from_checkpoint`` is deprecated in v1.5 and will be removed in v1.7.
+     Please pass ``trainer.fit(ckpt_path="some/path/to/my_checkpoint.ckpt")`` instead.
+
+
  .. raw:: html

      <video width="50%" max-width="400px" controls

docs/source/common/weights_loading.rst

Lines changed: 2 additions & 2 deletions
@@ -212,7 +212,7 @@ do the following:
  .. code-block:: python

      model = LitModel()
-     trainer = Trainer(resume_from_checkpoint="some/path/to/my_checkpoint.ckpt")
+     trainer = Trainer()

      # automatically restores model, epoch, step, LR schedulers, apex, etc...
-     trainer.fit(model)
+     trainer.fit(model, ckpt_path="some/path/to/my_checkpoint.ckpt")

docs/source/extensions/loops_advanced.rst

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ The two hooks :class:`~pytorch_lightning.loops.base.Loop.on_save_checkpoint` and
      def on_load_checkpoint(self, state_dict):
          self.iteration = state_dict["iteration"]

- When the Trainer is restarting from a checkpoint (e.g., through :code:`Trainer(resume_from_checkpoint=...)`), the loop exposes a boolean attribute :attr:`~pytorch_lightning.loops.base.Loop.restarting`.
+ When the Trainer is restarting from a checkpoint (e.g., through :code:`trainer.fit(ckpt_path=...)`), the loop exposes a boolean attribute :attr:`~pytorch_lightning.loops.base.Loop.restarting`.
  Based around the value of this variable, the user can write the loop in such a way that it can restart from an arbitrary point given the state loaded from the checkpoint.
  For example, the implementation of the :meth:`~pytorch_lightning.loops.base.Loop.reset` method could look like this given our previous example:
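
The diff context ends right where the docs introduce that ``reset`` example. For orientation, here is a minimal sketch (not the docs' verbatim code) of a counting loop whose ``reset`` honours the ``restarting`` flag; the class name and iteration limit are made up for illustration:

    from pytorch_lightning.loops.base import Loop


    class CountingLoop(Loop):
        """Illustrative loop: counts iterations and can resume from a checkpoint."""

        def __init__(self, max_iterations: int = 10):
            super().__init__()
            self.max_iterations = max_iterations
            self.iteration = 0

        @property
        def done(self) -> bool:
            return self.iteration >= self.max_iterations

        def reset(self) -> None:
            if self.restarting:
                # `restarting` is True when the state below was loaded from a checkpoint,
                # e.g. via `trainer.fit(ckpt_path=...)`: keep the restored counter and
                # continue where the previous run stopped.
                self.restarting = False
            else:
                self.iteration = 0

        def advance(self) -> None:
            self.iteration += 1

        def on_save_checkpoint(self) -> dict:
            return {"iteration": self.iteration}

        def on_load_checkpoint(self, state_dict: dict) -> None:
            self.iteration = state_dict["iteration"]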

Lines changed: 168 additions & 0 deletions
@@ -0,0 +1,168 @@
# Copyright The PyTorch Lightning team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
from functools import partial
from typing import Generator

import torch

from pl_examples.domain_templates.generative_adversarial_net import GAN as GANTemplate
from pl_examples.domain_templates.generative_adversarial_net import MNISTDataModule
from pytorch_lightning import Trainer
from pytorch_lightning.loops import OptimizerLoop
from pytorch_lightning.loops.optimization.optimizer_loop import ClosureResult
from pytorch_lightning.loops.utilities import _build_training_step_kwargs
from pytorch_lightning.utilities.exceptions import MisconfigurationException

#############################################################################################
# Yield Loop
#
# This example shows an implementation of a custom loop that changes how the
# `LightningModule.training_step` behaves. In particular, this custom "Yield" loop will
# enable the `training_step` to yield like a Python generator, retaining the values
# of local variables for subsequent calls. This can result in much cleaner and more elegant
# code when dealing with multiple optimizers (automatic optimization).
#
# Learn more about the loop structure from the documentation:
# https://pytorch-lightning.readthedocs.io/en/latest/extensions/loops.html
#############################################################################################


#############################################################################################
# Step 1 / 3: Implement a custom OptimizerLoop
#
# The `training_step` gets called in the
# `pytorch_lightning.loops.optimization.OptimizerLoop`. To make it into a Python generator,
# we need to override the place where it gets called.
#############################################################################################


class YieldLoop(OptimizerLoop):
    def __init__(self):
        super().__init__()
        self._generator = None

    def connect(self, **kwargs):
        raise NotImplementedError(f"{self.__class__.__name__} does not connect any child loops.")

    def on_run_start(self, batch, optimizers, batch_idx):
        super().on_run_start(batch, optimizers, batch_idx)
        if not inspect.isgeneratorfunction(self.trainer.lightning_module.training_step):
            raise MisconfigurationException("The `LightningModule` does not yield anything in the `training_step`.")
        assert self.trainer.lightning_module.automatic_optimization

        # We request the generator once and save it for later
        # so we can call next() on it.
        self._generator = self._get_generator(batch, batch_idx, opt_idx=0)

    def _make_step_fn(self, split_batch, batch_idx, opt_idx):
        return partial(self._training_step, self._generator)

    def _get_generator(self, split_batch, batch_idx, opt_idx):
        step_kwargs = _build_training_step_kwargs(
            self.trainer.lightning_module, self.trainer.optimizers, split_batch, batch_idx, opt_idx, hiddens=None
        )

        # Here we are basically calling `lightning_module.training_step()`
        # and this returns a generator! The `training_step` is handled by the
        # accelerator to enable distributed training.
        return self.trainer.accelerator.training_step(step_kwargs)

    def _training_step(self, generator):
        # required for logging
        self.trainer.lightning_module._current_fx_name = "training_step"

        # Here, instead of calling `lightning_module.training_step()`
        # we call next() on the generator!
        training_step_output = next(generator)
        self.trainer.accelerator.post_training_step()

        training_step_output = self.trainer.call_hook("training_step_end", training_step_output)

        # The closure result takes care of properly detaching the loss for logging and performs
        # some additional checks that the output format is correct.
        result = ClosureResult.from_training_step_output(training_step_output, self.trainer.accumulate_grad_batches)
        return result


#############################################################################################
# Step 2 / 3: Implement a model using the new yield mechanism
#
# We can now implement a model that defines the `training_step` using "yield" statements.
# We choose a generative adversarial network (GAN) because it alternates between two
# optimizers updating the model parameters. In the first step we compute the loss of the
# first network (coincidentally also named "generator") and yield the loss. In the second
# step we compute the loss of the second network (the "discriminator") and yield again.
# The nice property of this yield approach is that we can reuse variables that we computed
# earlier. If this was a regular Lightning `training_step`, we would have to recompute the
# output of the first network.
#############################################################################################


class GAN(GANTemplate):

    # This training_step method is now a Python generator
    def training_step(self, batch, batch_idx, optimizer_idx=0) -> Generator:
        imgs, _ = batch
        z = torch.randn(imgs.shape[0], self.hparams.latent_dim)
        z = z.type_as(imgs)

        # Here, we compute the generator output once and reuse it later.
        # It gets saved when we yield from the training_step.
        # The output then gets re-used again in the discriminator update.
        generator_output = self(z)

        # train generator
        real_labels = torch.ones(imgs.size(0), 1)
        real_labels = real_labels.type_as(imgs)
        g_loss = self.adversarial_loss(self.discriminator(generator_output), real_labels)
        self.log("g_loss", g_loss)

        # Yield instead of return: This makes the training_step a Python generator.
        # Once we call it again, it will continue the execution with the block below
        yield g_loss

        # train discriminator
        real_labels = torch.ones(imgs.size(0), 1)
        real_labels = real_labels.type_as(imgs)
        real_loss = self.adversarial_loss(self.discriminator(imgs), real_labels)
        fake_labels = torch.zeros(imgs.size(0), 1)
        fake_labels = fake_labels.type_as(imgs)

        # We make use again of the generator_output
        fake_loss = self.adversarial_loss(self.discriminator(generator_output.detach()), fake_labels)
        d_loss = (real_loss + fake_loss) / 2
        self.log("d_loss", d_loss)

        yield d_loss


#############################################################################################
# Step 3 / 3: Connect the loop to the Trainer
#
# Finally, attach the loop to the `Trainer`. Here, we modified the `AutomaticOptimization`
# loop which is a subloop of the `TrainingBatchLoop`. We use `.connect()` to attach it.
#############################################################################################

if __name__ == "__main__":
    model = GAN()
    dm = MNISTDataModule()
    trainer = Trainer()

    # Connect the new loop
    # YieldLoop now replaces the previous optimizer loop
    trainer.fit_loop.epoch_loop.batch_loop.connect(optimizer_loop=YieldLoop())

    # fit() will now use the new loop!
    trainer.fit(model, dm)

pytorch_lightning/accelerators/accelerator.py

Lines changed: 2 additions & 3 deletions
@@ -329,6 +329,7 @@ def optimizer_step(
              opt_idx: index of the current optimizer
              lambda_closure: closure calculating the loss value
              model: reference to the model, optionally defining optimizer step related hooks
+             **kwargs: Any extra arguments to ``optimizer.step``
          """
          model = model or self.lightning_module
          make_optimizer_step = self.precision_plugin.pre_optimizer_step(

@@ -349,9 +350,7 @@ def clip_gradients(
          gradient_clip_algorithm: GradClipAlgorithmType = GradClipAlgorithmType.NORM,
      ) -> None:
          """clips all the optimizer parameters to the given value."""
-         self.precision_plugin.clip_gradients(
-             optimizer, clip_val, gradient_clip_algorithm=gradient_clip_algorithm, model=self.model
-         )
+         self.precision_plugin.clip_gradients(optimizer, clip_val, gradient_clip_algorithm=gradient_clip_algorithm)

      def setup_optimizers(self, trainer: "pl.Trainer") -> None:
          """Creates optimizers and schedulers.
