
Commit bc90da1

Merge branch 'master' into uv-for-pytorch-tests
2 parents f060aa2 + 04e103b commit bc90da1

File tree

2 files changed: +39 additions, −3 deletions

docs/source-pytorch/common/hooks.rst

Lines changed: 20 additions & 3 deletions
@@ -83,13 +83,30 @@ with the source of each hook indicated:

     trainer.fit()

     ├── setup(stage="fit")
-    │   └── [Callbacks only]
-
-    ├── on_fit_start()
+    │   ├── [LightningDataModule]
     │   ├── [Callbacks]
     │   ├── [LightningModule]
+    │   ├── [LightningModule.configure_shared_model()]
+    │   ├── [LightningModule.configure_model()]
+    │   ├── Strategy.restore_checkpoint_before_setup
+    │   │   ├── [LightningModule.on_load_checkpoint()]
+    │   │   ├── [LightningModule.load_state_dict()]
+    │   │   ├── [LightningDataModule.load_state_dict()]
+    │   │   ├── [Callbacks.on_load_checkpoint()]
+    │   │   └── [Callbacks.load_state_dict()]
     │   └── [Strategy]

+    ├── on_fit_start()
+    │   ├── [Callbacks]
+    │   └── [LightningModule]
+
+    ├── Strategy.restore_checkpoint_after_setup
+    │   ├── [LightningModule.on_load_checkpoint()]
+    │   ├── [LightningModule.load_state_dict()]
+    │   ├── [LightningDataModule.load_state_dict()]
+    │   ├── [Callbacks.on_load_checkpoint()]
+    │   └── [Callbacks.load_state_dict()]
+
     ├── on_sanity_check_start()
     │   ├── [Callbacks]
     │   ├── [LightningModule]
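
The tree in this hunk is a dispatch order: checkpoint state is restored either inside ``setup()`` (``restore_checkpoint_before_setup``) or after ``on_fit_start()`` (``restore_checkpoint_after_setup``). As a rough mental model only, that placement can be sketched in plain Python; ``run_fit_hooks`` and its flag are hypothetical stand-ins, not Lightning's real API:

```python
def run_fit_hooks(restore_before_setup: bool) -> list[str]:
    """Return hook names in the order shown in the tree above.

    restore_before_setup mirrors Strategy.restore_checkpoint_before_setup:
    when True, checkpoint state is restored inside setup(); otherwise it is
    restored after the on_fit_start() hooks. Purely an illustrative sketch.
    """
    restore = [
        "LightningModule.on_load_checkpoint",
        "LightningModule.load_state_dict",
        "LightningDataModule.load_state_dict",
        "Callbacks.on_load_checkpoint",
        "Callbacks.load_state_dict",
    ]
    order = [
        "LightningDataModule.setup",
        "Callbacks.setup",
        "LightningModule.setup",
        "LightningModule.configure_shared_model",
        "LightningModule.configure_model",
    ]
    if restore_before_setup:
        order += restore  # restored within setup(stage="fit")
    order += ["Strategy.setup", "Callbacks.on_fit_start", "LightningModule.on_fit_start"]
    if not restore_before_setup:
        order += restore  # restored after on_fit_start()
    order.append("on_sanity_check_start")
    return order


# The restore hooks always run; only their position in the sequence changes:
early = run_fit_hooks(restore_before_setup=True)
late = run_fit_hooks(restore_before_setup=False)
assert early.index("LightningModule.load_state_dict") < early.index("Strategy.setup")
assert late.index("LightningModule.load_state_dict") > late.index("Strategy.setup")
```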

docs/source-pytorch/tuning/profiler_basic.rst

Lines changed: 19 additions & 0 deletions
@@ -121,3 +121,22 @@ This can be measured with the :class:`~lightning.pytorch.callbacks.device_stats_

 CPU metrics will be tracked by default on the CPU accelerator. To enable it for other accelerators set ``DeviceStatsMonitor(cpu_stats=True)``. To disable logging
 CPU metrics, you can specify ``DeviceStatsMonitor(cpu_stats=False)``.
+
+.. warning::
+
+    **Do not wrap** ``Trainer.fit()``, ``Trainer.validate()``, or other ``Trainer`` methods inside a manual
+    ``torch.profiler.profile`` context manager. Doing so causes unexpected crashes and cryptic errors because the
+    PyTorch Profiler's context management is incompatible with Lightning's internal training loop.
+    Instead, always use the ``profiler`` argument of the ``Trainer`` constructor, or the
+    :class:`~lightning.pytorch.profilers.pytorch.PyTorchProfiler` class if you want to customize the profiling.
+
+    Example:
+
+    .. code-block:: python
+
+        from lightning.pytorch import Trainer
+        from lightning.pytorch.profilers import PyTorchProfiler
+
+        trainer = Trainer(profiler="pytorch")
+        # or
+        trainer = Trainer(profiler=PyTorchProfiler(dirpath=".", filename="perf_logs"))
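
The design choice this warning encodes, letting the component that owns the loop drive the profiler instead of wrapping the loop from outside, can be illustrated without any Lightning or PyTorch dependency. All names below (``ToyProfiler``, ``ToyTrainer``) are hypothetical, a sketch of the pattern rather than either library's implementation:

```python
class ToyProfiler:
    """Stand-in for a profiler that must observe each step boundary."""

    def __init__(self):
        self.steps = 0
        self.active = False

    def start(self):
        self.active = True

    def step(self):
        # Only the code that owns the loop knows where step boundaries are.
        if not self.active:
            raise RuntimeError("step() called outside an active profiling session")
        self.steps += 1

    def stop(self):
        self.active = False


class ToyTrainer:
    """Owns the training loop, so it drives the profiler's lifecycle itself.

    This mirrors the supported pattern: the profiler is passed to the
    constructor, never wrapped around fit() from the outside.
    """

    def __init__(self, profiler=None):
        self.profiler = profiler

    def fit(self, num_steps=3):
        if self.profiler:
            self.profiler.start()
        for _ in range(num_steps):
            # ... one training step would run here ...
            if self.profiler:
                self.profiler.step()
        if self.profiler:
            self.profiler.stop()


profiler = ToyProfiler()
ToyTrainer(profiler=profiler).fit(num_steps=3)
print(profiler.steps)  # prints 3
```

Because the trainer starts, steps, and stops the profiler at the boundaries it alone knows about, there is no second context manager competing for the profiler's lifecycle, which is exactly what an external ``with torch.profiler.profile(...)`` around ``Trainer.fit()`` would introduce.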
