
Commit b2a8ddd

awaelchli authored and lantiga committed
Consistent imports in docs for core APIs (#18869)
Co-authored-by: Sebastian Raschka <[email protected]> (cherry picked from commit f6a36cf)
1 parent cb06f09 commit b2a8ddd

24 files changed, +104 -94 lines
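
The change standardizes the documentation examples on a single top-level import. A minimal sketch of the convention the diffs below converge on; `MyLightningModule` is a placeholder model name borrowed from the examples, and the sketch only illustrates the naming, not a complete training run:

    import lightning as L


    class MyLightningModule(L.LightningModule):
        ...


    my_model = MyLightningModule()
    trainer = L.Trainer()
    trainer.fit(my_model)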

_notebooks

docs/source-pytorch/accelerators/tpu_faq.rst

Lines changed: 2 additions & 2 deletions
@@ -88,10 +88,10 @@ How to setup the debug mode for Training on TPUs?

 .. code-block:: python

-    import lightning.pytorch as pl
+    import lightning as L

     my_model = MyLightningModule()
-    trainer = pl.Trainer(accelerator="tpu", devices=8, strategy="xla_debug")
+    trainer = L.Trainer(accelerator="tpu", devices=8, strategy="xla_debug")
     trainer.fit(my_model)

 Example Metrics report:

docs/source-pytorch/accelerators/tpu_intermediate.rst

Lines changed: 4 additions & 4 deletions
@@ -44,10 +44,10 @@ To use a full TPU pod skip to the TPU pod section.

 .. code-block:: python

-    import lightning.pytorch as pl
+    import lightning as L

     my_model = MyLightningModule()
-    trainer = pl.Trainer(accelerator="tpu", devices=8)
+    trainer = L.Trainer(accelerator="tpu", devices=8)
     trainer.fit(my_model)

 That's it! Your model will train on all 8 TPU cores.

@@ -113,10 +113,10 @@ By default, TPU training will use 32-bit precision. To enable it, do

 .. code-block:: python

-    import lightning.pytorch as pl
+    import lightning as L

     my_model = MyLightningModule()
-    trainer = pl.Trainer(accelerator="tpu", precision="16-true")
+    trainer = L.Trainer(accelerator="tpu", precision="16-true")
     trainer.fit(my_model)

 Under the hood the xla library will use the `bfloat16 type <https://en.wikipedia.org/wiki/Bfloat16_floating-point_format>`_.

docs/source-pytorch/advanced/model_parallel/deepspeed.rst

Lines changed: 6 additions & 7 deletions
@@ -132,12 +132,11 @@ For even more speed benefit, DeepSpeed offers an optimized CPU version of ADAM c

 .. code-block:: python

-    import lightning.pytorch
-    from lightning.pytorch import Trainer
+    from lightning.pytorch import LightningModule, Trainer
     from deepspeed.ops.adam import DeepSpeedCPUAdam


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

         def configure_optimizers(self):

@@ -180,7 +179,7 @@ Also please have a look at our :ref:`deepspeed-zero-stage-3-tips` which contains

     from deepspeed.ops.adam import FusedAdam


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

         def configure_optimizers(self):

@@ -202,7 +201,7 @@ You can also use the Lightning Trainer to run predict or evaluate with DeepSpeed

     from lightning.pytorch import Trainer


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

@@ -228,7 +227,7 @@ This reduces the time taken to initialize very large models, as well as ensure w

     from deepspeed.ops.adam import FusedAdam


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

         def configure_model(self):

@@ -367,7 +366,7 @@ This saves memory when training larger models, however requires using a checkpoi

     import deepspeed


-    class MyModel(pl.LightningModule):
+    class MyModel(LightningModule):
         ...

         def configure_model(self):
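
For readers skimming this file's hunks, a minimal sketch of the pattern they converge on, assuming DeepSpeed is installed; the linear layer, learning rate, strategy string, and precision flag are illustrative choices, not taken from the diff:

    import torch
    from lightning.pytorch import LightningModule, Trainer
    from deepspeed.ops.adam import DeepSpeedCPUAdam


    class MyModel(LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(32, 2)

        def configure_optimizers(self):
            # DeepSpeed's CPU-optimized Adam pairs with ZeRO optimizer-state offloading
            return DeepSpeedCPUAdam(self.parameters(), lr=1e-3)


    trainer = Trainer(
        accelerator="gpu",
        devices=1,
        strategy="deepspeed_stage_2_offload",
        precision="16-mixed",
    )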

docs/source-pytorch/advanced/training_tricks.rst

Lines changed: 2 additions & 2 deletions
@@ -398,7 +398,7 @@ The :class:`~lightning.pytorch.core.datamodule.LightningDataModule` class provid

 .. code-block:: python

-    class MNISTDataModule(pl.LightningDataModule):
+    class MNISTDataModule(L.LightningDataModule):
         def prepare_data(self):
             MNIST(self.data_dir, download=True)

@@ -421,7 +421,7 @@ For this, all data pre-loading should be done on the main process inside :meth:`

 .. code-block:: python

-    class MNISTDataModule(pl.LightningDataModule):
+    class MNISTDataModule(L.LightningDataModule):
         def __init__(self, data_dir: str):
             self.mnist = MNIST(data_dir, download=True, transform=T.ToTensor())
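
A slightly fuller sketch of the `MNISTDataModule` pattern these two hunks touch, assuming `torchvision` is available; the batch size, transform, and data directory are illustrative:

    import lightning as L
    from torch.utils.data import DataLoader
    from torchvision import transforms as T
    from torchvision.datasets import MNIST


    class MNISTDataModule(L.LightningDataModule):
        def __init__(self, data_dir: str = "./data"):
            super().__init__()
            self.data_dir = data_dir

        def prepare_data(self):
            # runs once on a single process: download only, assign no state here
            MNIST(self.data_dir, download=True)

        def setup(self, stage: str):
            # runs on every process: safe to assign datasets here
            self.mnist = MNIST(self.data_dir, transform=T.ToTensor())

        def train_dataloader(self):
            return DataLoader(self.mnist, batch_size=32)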

docs/source-pytorch/cli/lightning_cli_advanced.rst

Lines changed: 1 addition & 1 deletion
@@ -164,7 +164,7 @@ to the class constructor. For example, your model is defined as:
 .. code:: python

     # model.py
-    class MyModel(pl.LightningModule):
+    class MyModel(L.LightningModule):
         def __init__(self, criterion: torch.nn.Module):
             self.criterion = criterion
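
The hunk's context mentions wiring a `torch.nn.Module` into the model constructor via the CLI; a hedged sketch of the entry point that typically accompanies such a model (the file and module names are assumptions, not part of the commit):

    # cli.py: hypothetical entry point that hands the MyModel class from the
    # hunk above to LightningCLI so its constructor args become CLI/config options
    from lightning.pytorch.cli import LightningCLI

    from model import MyModel

    if __name__ == "__main__":
        LightningCLI(MyModel)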

docs/source-pytorch/common/checkpointing_advanced.rst

Lines changed: 7 additions & 4 deletions
@@ -54,9 +54,9 @@ Modify a checkpoint anywhere
 ****************************
 When you need to change the components of a checkpoint before saving or loading, use the :meth:`~lightning.pytorch.core.hooks.CheckpointHooks.on_save_checkpoint` and :meth:`~lightning.pytorch.core.hooks.CheckpointHooks.on_load_checkpoint` of your ``LightningModule``.

-.. code:: python
+.. code-block:: python

-    class LitModel(pl.LightningModule):
+    class LitModel(L.LightningModule):
         def on_save_checkpoint(self, checkpoint):
             checkpoint["something_cool_i_want_to_save"] = my_cool_pickable_object

@@ -65,9 +65,12 @@ When you need to change the components of a checkpoint before saving or loading,

 Use the above approach when you need to couple this behavior to your LightningModule for reproducibility reasons. Otherwise, Callbacks also have the :meth:`~lightning.pytorch.callbacks.callback.Callback.on_save_checkpoint` and :meth:`~lightning.pytorch.callbacks.callback.Callback.on_load_checkpoint` which you should use instead:

-.. code:: python
+.. code-block:: python
+
+    import lightning as L
+

-    class LitCallback(pl.Callback):
+    class LitCallback(L.Callback):
         def on_save_checkpoint(self, checkpoint):
             checkpoint["something_cool_i_want_to_save"] = my_cool_pickable_object
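
Filling in the surrounding code for context, a minimal sketch of the `LightningModule` hooks shown in the first hunk; `my_cool_pickable_object` stands in for any picklable attribute worth persisting:

    import lightning as L


    class LitModel(L.LightningModule):
        def on_save_checkpoint(self, checkpoint):
            # add extra entries to the checkpoint dict before it is written
            checkpoint["something_cool_i_want_to_save"] = self.my_cool_pickable_object

        def on_load_checkpoint(self, checkpoint):
            # restore the extra entries when the checkpoint is loaded
            self.my_cool_pickable_object = checkpoint["something_cool_i_want_to_save"]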

docs/source-pytorch/common/checkpointing_basic.rst

Lines changed: 2 additions & 2 deletions
@@ -127,7 +127,7 @@ In some cases, we may also pass entire PyTorch modules to the ``__init__`` metho

 .. code-block:: python

-    class LitAutoencoder(pl.LightningModule):
+    class LitAutoencoder(L.LightningModule):
         def __init__(self, encoder, decoder):
             ...

@@ -160,7 +160,7 @@ For example, let's pretend we created a LightningModule like so:

     ...


-    class Autoencoder(pl.LightningModule):
+    class Autoencoder(L.LightningModule):
         def __init__(self, encoder, decoder, *args, **kwargs):
             ...
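
The first hunk's context mentions passing whole PyTorch modules to `__init__`; a hedged sketch of how such a model is usually restored, with the encoder/decoder sizes and the checkpoint path as placeholders:

    import lightning as L
    from torch import nn


    class LitAutoencoder(L.LightningModule):
        def __init__(self, encoder, decoder):
            super().__init__()
            self.encoder = encoder
            self.decoder = decoder


    # the submodules are not saved as hyperparameters here,
    # so they are passed again when loading from a checkpoint
    encoder, decoder = nn.Linear(28 * 28, 3), nn.Linear(3, 28 * 28)
    model = LitAutoencoder.load_from_checkpoint(
        "path/to/checkpoint.ckpt", encoder=encoder, decoder=decoder
    )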

docs/source-pytorch/common/checkpointing_intermediate.rst

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ Any value that has been logged via *self.log* in the LightningModule can be moni

 .. code-block:: python

-    class LitModel(pl.LightningModule):
+    class LitModel(L.LightningModule):
         def training_step(self, batch, batch_idx):
             self.log("my_metric", x)
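
The surrounding docs describe monitoring values logged with `self.log`; a short sketch of wiring the logged key from this hunk into `ModelCheckpoint` (the `mode` argument is an illustrative choice):

    import lightning as L
    from lightning.pytorch.callbacks import ModelCheckpoint

    # any key logged via self.log (here "my_metric", as in the hunk) can be monitored
    checkpoint_callback = ModelCheckpoint(monitor="my_metric", mode="min")
    trainer = L.Trainer(callbacks=[checkpoint_callback])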

docs/source-pytorch/common/evaluation_basic.rst

Lines changed: 3 additions & 3 deletions
@@ -39,7 +39,7 @@ To add a test loop, implement the **test_step** method of the LightningModule

 .. code:: python

-    class LitAutoEncoder(pl.LightningModule):
+    class LitAutoEncoder(L.LightningModule):
         def training_step(self, batch, batch_idx):
             ...

@@ -99,7 +99,7 @@ To add a validation loop, implement the **validation_step** method of the Lightn

 .. code:: python

-    class LitAutoEncoder(pl.LightningModule):
+    class LitAutoEncoder(L.LightningModule):
         def training_step(self, batch, batch_idx):
             ...

@@ -126,5 +126,5 @@ To run the validation loop, pass in the validation set to **.fit**
     valid_loader = DataLoader(valid_set)

     # train with both splits
-    trainer = Trainer()
+    trainer = L.Trainer()
     trainer.fit(model, train_loader, valid_loader)
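
To see the final hunk in context, a self-contained sketch of a training/validation run under the new import style; the reconstruction-loss helper and the random tensors are illustrative, not from the docs:

    import torch
    import lightning as L
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset


    class LitAutoEncoder(L.LightningModule):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Linear(28 * 28, 3)
            self.decoder = nn.Linear(3, 28 * 28)

        def _reconstruction_loss(self, batch):
            (x,) = batch
            return nn.functional.mse_loss(self.decoder(self.encoder(x)), x)

        def training_step(self, batch, batch_idx):
            loss = self._reconstruction_loss(batch)
            self.log("train_loss", loss)
            return loss

        def validation_step(self, batch, batch_idx):
            # logged validation values are aggregated per epoch by Lightning
            self.log("val_loss", self._reconstruction_loss(batch))

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)


    # random tensors stand in for a real dataset
    train_loader = DataLoader(TensorDataset(torch.randn(64, 28 * 28)), batch_size=16)
    valid_loader = DataLoader(TensorDataset(torch.randn(16, 28 * 28)), batch_size=16)

    # train with both splits: the validation loop runs after each training epoch
    trainer = L.Trainer(max_epochs=1)
    trainer.fit(LitAutoEncoder(), train_loader, valid_loader)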
