
Commit 9e844d9

Authored by tchaton, rohitgr7, and awaelchli
Lite Docs and Example Improvements (#10303)
Co-authored-by: Rohit Gupta <[email protected]>
Co-authored-by: Adrian Wälchli <[email protected]>
1 parent 1686aab commit 9e844d9

File tree

7 files changed (+80, -18 lines)


.azure-pipelines/gpu-tests.yml

Lines changed: 0 additions & 2 deletions
@@ -108,8 +108,6 @@ jobs:
       bash pl_examples/run_examples.sh --trainer.gpus=1
       bash pl_examples/run_examples.sh --trainer.gpus=2 --trainer.strategy=ddp
       bash pl_examples/run_examples.sh --trainer.gpus=2 --trainer.strategy=ddp --trainer.precision=16
-      bash pl_examples/run_examples.sh --trainer.gpus=2 --trainer.strategy=dp
-      bash pl_examples/run_examples.sh --trainer.gpus=2 --trainer.strategy=dp --trainer.precision=16
     env:
       PL_USE_MOCKED_MNIST: "1"
     displayName: 'Testing: examples'

.gitignore

Lines changed: 3 additions & 0 deletions
@@ -24,6 +24,9 @@ __pycache__/
 *.py[cod]
 *$py.class
 timit_data/
+grid_generated*
+grid_ori*
+
 
 
 # C extensions

docs/source/starter/lightning_lite.rst

Lines changed: 9 additions & 7 deletions
@@ -3,15 +3,14 @@ LightningLite - Stepping Stone to Lightning
 ###########################################
 
 
+:class:`~pytorch_lightning.lite.LightningLite` enables pure PyTorch users to scale their existing code
+on any kind of device while retaining full control over their own loops and optimization logic.
+
 .. image:: https://pl-public-data.s3.amazonaws.com/docs/static/images/lite/lightning_lite.gif
-    :alt: Animation showing how to convert a standard training loop to a Lightning loop
-    :width: 600px
+    :alt: Animation showing how to convert your PyTorch code to LightningLite.
+    :width: 500
     :align: center
 
-|
-
-:class:`~pytorch_lightning.lite.LightningLite` enables pure PyTorch users to scale their existing code
-on any kind of device while retaining full control over their own loops and optimization logic.
 
 :class:`~pytorch_lightning.lite.LightningLite` is the right tool for you if you match one of the two following descriptions:
 
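For context, a minimal sketch (not from this commit) of the workflow the updated intro describes: you keep your own training loop, and Lite only takes over device placement. The `MyLite` class, toy model, and dataset below are hypothetical; `setup`, `setup_dataloaders`, and `backward` are the LightningLite calls.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from pytorch_lightning.lite import LightningLite


class MyLite(LightningLite):
    def run(self, epochs: int = 1):
        # Hypothetical toy model and data, only to illustrate the calls.
        model = torch.nn.Linear(32, 2)
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        dataloader = DataLoader(dataset, batch_size=8)

        # Lite wraps these so they land on the chosen device(s) and precision.
        model, optimizer = self.setup(model, optimizer)
        dataloader = self.setup_dataloaders(dataloader)

        # The loop itself stays under the user's control.
        for _ in range(epochs):
            for x, y in dataloader:
                optimizer.zero_grad()
                loss = torch.nn.functional.cross_entropy(model(x), y)
                self.backward(loss)  # replaces loss.backward()
                optimizer.step()


MyLite(accelerator="cpu").run()
```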
@@ -246,6 +245,9 @@ from its hundreds of features.
 
 You can see our :class:`~pytorch_lightning.lite.LightningLite` as a
 future :class:`~pytorch_lightning.core.lightning.LightningModule` and slowly refactor your code into its API.
+Below, the :meth:`~pytorch_lightning.core.lightning.LightningModule.training_step`, :meth:`~pytorch_lightning.core.lightning.LightningModule.forward`,
+:meth:`~pytorch_lightning.core.lightning.LightningModule.configure_optimizers`, and :meth:`~pytorch_lightning.core.lightning.LightningModule.train_dataloader`
+methods are implemented.
 
 
 .. code-block:: python
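As a rough illustration (not the code block from the docs themselves, and using a hypothetical toy model and dataset), the intermediate stage can look like this: the future `LightningModule` hooks already exist, but `run()` still drives a hand-written loop.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from pytorch_lightning.lite import LightningLite


class Lite(LightningLite):
    # Hooks that will later move unchanged into a LightningModule:
    def forward(self, x):
        return self.module(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.forward(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.module.parameters(), lr=0.1)

    def train_dataloader(self):
        dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        return DataLoader(dataset, batch_size=8)

    # Still a manual loop; this is what Trainer.fit() replaces at the end.
    def run(self):
        self.module = torch.nn.Linear(32, 2)
        optimizer = self.configure_optimizers()
        self.module, optimizer = self.setup(self.module, optimizer)
        dataloader = self.setup_dataloaders(self.train_dataloader())
        for batch_idx, batch in enumerate(dataloader):
            optimizer.zero_grad()
            loss = self.training_step(batch, batch_idx)
            self.backward(loss)
            optimizer.step()


Lite(accelerator="cpu").run()
```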
@@ -300,7 +302,7 @@ future :class:`~pytorch_lightning.core.lightning.LightningModule` and slowly ref
 
 
 Finally, change the :meth:`~pytorch_lightning.lite.LightningLite.run` into a
-:meth:`~pytorch_lightning.core.lightning.LightningModule.__init__` and drop the inner code for setting up the components.
+:meth:`~pytorch_lightning.core.lightning.LightningModule.__init__` and drop the fit method.
 
 .. code-block:: python
 
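The hypothetical end state of that refactor (again not taken from the docs' own code block): the setup code moves into `__init__`, and `Trainer.fit()` replaces the hand-written loop.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from pytorch_lightning import LightningModule, Trainer


class LitClassifier(LightningModule):
    def __init__(self, lr: float = 0.1):
        super().__init__()
        self.model = torch.nn.Linear(32, 2)
        self.lr = lr

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.lr)

    def train_dataloader(self):
        dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
        return DataLoader(dataset, batch_size=8)


# The Trainer now owns the loop; its flags take over device and precision selection.
Trainer(max_epochs=1).fit(LitClassifier())
```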
pl_examples/README.md

Lines changed: 9 additions & 1 deletion
@@ -25,7 +25,7 @@ In this folder, we have 2 simple examples:
 
 - [Image Classifier](./basic_examples/backbone_image_classifier.py) (trains arbitrary datasets with arbitrary backbones).
 - [Image Classifier + DALI](./basic_examples/mnist_examples/image_classifier_4_dali.py) (defines the model inside the `LightningModule`).
-- [Autoencoder](./basic_examples/autoencoder.py) (shows how the `LightningModule` can be used as a system)
+- [Autoencoder](./basic_examples/autoencoder.py)
 
 ______________________________________________________________________
 
@@ -37,6 +37,14 @@ for advanced use cases.
 
 ______________________________________________________________________
 
+## Basic Examples
+
+In this folder, we have 1 simple example:
+
+- [Image Classifier + DALI](./integration_examples/dali_image_classifier.py) (defines the model inside the `LightningModule`).
+
+______________________________________________________________________
+
 ## Loop examples
 
 Contains implementations leveraging [loop customization](https://pytorch-lightning.readthedocs.io/en/latest/extensions/loops.html) to enhance the Trainer with new optimization routines.

pl_examples/basic_examples/README.md

Lines changed: 53 additions & 8 deletions
@@ -14,7 +14,7 @@ Trains a simple CNN over MNIST using vanilla PyTorch.
 
 ```bash
 # CPU
-python image_classifier_1_pytorch.py
+python mnist_examples/image_classifier_1_pytorch.py
 ```
 
 ______________________________________________________________________
@@ -25,7 +25,7 @@ This script shows you how to scale the previous script to enable GPU and multi-G
 
 ```bash
 # CPU / multiple GPUs if available
-python image_classifier_2_lite.py
+python mnist_examples/image_classifier_2_lite.py
 ```
 
 ______________________________________________________________________
@@ -36,7 +36,7 @@ This script shows you how to prepare your conversion from [LightningLite](https:
 
 ```bash
 # CPU / multiple GPUs if available
-python image_classifier_3_lite_to_lightning_module.py
+python mnist_examples/image_classifier_3_lite_to_lightning_module.py
 ```
 
 ______________________________________________________________________
@@ -47,10 +47,10 @@ This script shows you the result of the conversion to the `LightningModule` and
 
 ```bash
 # CPU
-python image_classifier_4_lightning_module.py
+python mnist_examples/image_classifier_4_lightning_module.py
 
 # GPUs (any number)
-python image_classifier_4_lightning_module.py --trainer.gpus 2
+python mnist_examples/image_classifier_4_lightning_module.py --trainer.gpus 2
 ```
 
 ______________________________________________________________________
@@ -61,11 +61,56 @@ This script shows you how to extract the data related components into a `Lightni
 
 ```bash
 # CPU
-python image_classifier_5_lightning_datamodule.py
+python mnist_examples/image_classifier_5_lightning_datamodule.py
 
 # GPUs (any number)
-python image_classifier_5_lightning_datamodule.py --trainer.gpus 2
+python mnist_examples/image_classifier_5_lightning_datamodule.py --trainer.gpus 2
 
 # Distributed Data Parallel (DDP)
-python image_classifier_5_lightning_datamodule.py --trainer.gpus 2 --trainer.strategy 'ddp'
+python mnist_examples/image_classifier_5_lightning_datamodule.py --trainer.gpus 2 --trainer.strategy 'ddp'
+```
+
+______________________________________________________________________
+
+#### AutoEncoder
+
+This script shows you how to implement a CNN auto-encoder.
+
+```bash
+# CPU
+python autoencoder.py
+
+# GPUs (any number)
+python autoencoder.py --trainer.gpus 2
+
+# Distributed Data Parallel (DDP)
+python autoencoder.py --trainer.gpus 2 --trainer.strategy 'ddp'
+```
+
+______________________________________________________________________
+
+#### Backbone Image Classifier
+
+This script shows you how to implement a `LightningModule` as a system.
+A system is a `LightningModule` that takes a single `torch.nn.Module`, which makes exporting to production simpler.
+
+```bash
+# CPU
+python backbone_image_classifier.py
+
+# GPUs (any number)
+python backbone_image_classifier.py --trainer.gpus 2
+
+# Distributed Data Parallel (DDP)
+python backbone_image_classifier.py --trainer.gpus 2 --trainer.strategy 'ddp'
+```
+
+______________________________________________________________________
+
+#### PyTorch Profiler
+
+This script shows you how to activate the [PyTorch Profiler](https://github.com/pytorch/kineto) with Lightning.
+
+```bash
+python profiler_example.py
 ```
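For readers unfamiliar with the "system" wording above, here is a minimal sketch of the pattern (the `LitSystem` class and `Linear` backbone are hypothetical, not the actual `backbone_image_classifier.py`): a `LightningModule` that wraps a single, swappable `torch.nn.Module`, so the backbone can be exported on its own.

```python
import torch
from pytorch_lightning import LightningModule


class LitSystem(LightningModule):
    def __init__(self, backbone: torch.nn.Module, lr: float = 1e-3):
        super().__init__()
        self.backbone = backbone  # the only nn.Module in the system
        self.lr = lr

    def forward(self, x):
        # Inference goes through the backbone only.
        return self.backbone(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)


system = LitSystem(backbone=torch.nn.Linear(32, 2))
scripted_backbone = torch.jit.script(system.backbone)  # export just the backbone
```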

pl_examples/basic_examples/mnist_examples/image_classifier_4_lightning_module.py

Lines changed: 3 additions & 0 deletions
@@ -18,6 +18,7 @@
 import torch
 import torchvision.transforms as T
 from torch.nn import functional as F
+from torchmetrics import Accuracy
 
 from pl_examples import cli_lightning_logo
 from pl_examples.basic_examples.mnist_datamodule import MNIST
@@ -31,6 +32,7 @@ def __init__(self, model=None, lr=1.0, gamma=0.7, batch_size=32):
         super().__init__()
         self.save_hyperparameters()
         self.model = model or Net()
+        self.test_acc = Accuracy()
 
     def forward(self, x):
         return self.model(x)
@@ -45,6 +47,7 @@ def test_step(self, batch, batch_idx):
         x, y = batch
         logits = self.forward(x)
         loss = F.nll_loss(logits, y.long())
+        self.log("test_acc", self.test_acc(logits, y))
         return loss
 
     def configure_optimizers(self):

pl_examples/basic_examples/mnist_examples/image_classifier_5_lightning_datamodule.py

Lines changed: 3 additions & 0 deletions
@@ -18,6 +18,7 @@
 import torch
 import torchvision.transforms as T
 from torch.nn import functional as F
+from torchmetrics import Accuracy
 
 from pl_examples import cli_lightning_logo
 from pl_examples.basic_examples.mnist_datamodule import MNIST
@@ -31,6 +32,7 @@ def __init__(self, model, lr=1.0, gamma=0.7, batch_size=32):
         super().__init__()
         self.save_hyperparameters()
         self.model = model or Net()
+        self.test_acc = Accuracy()
 
     def forward(self, x):
         return self.model(x)
@@ -45,6 +47,7 @@ def test_step(self, batch, batch_idx):
         x, y = batch
         logits = self.forward(x)
         loss = F.nll_loss(logits, y.long())
+        self.log("test_acc", self.test_acc(logits, y))
         return loss
 
     def configure_optimizers(self):
