
Commit d2b94ca

clean up docs (#1614)
* fixed hparams section
* docs clean up
1 parent 4755ded commit d2b94ca

File tree

4 files changed: +247 −117 lines changed


README.md

Lines changed: 4 additions & 1 deletion
@@ -54,7 +54,8 @@ pip install pytorch-lightning
 [MNIST on TPUs](https://colab.research.google.com/drive/1-_LKx4HwAxl5M6xPJmqAAu444LTDQoa3)

 ## What is it?
-Lightning is a way to organize your PyTorch code to decouple the science code from the engineering. It's more of a style-guide than a framework.
+Lightning is a way to organize your PyTorch code to decouple the science code from the engineering.
+It's more of a PyTorch style-guide than a framework.

 In Lightning, you organize your code into 3 distinct categories:

@@ -69,6 +70,8 @@ Here's an example of how to refactor your research code into a [LightningModule]
 The rest of the code is automated by the [Trainer](https://pytorch-lightning.readthedocs.io/en/latest/trainer.html)!
 ![PT to PL](docs/source/_images/lightning_module/pt_trainer.png)

+[READ THIS QUICK START PAGE](https://pytorch-lightning.readthedocs.io/en/latest/new-project.html)
+
 ## Testing Rigour
 All the automated code by the Trainer is [tested rigorously with every new PR](https://github.com/PyTorchLightning/pytorch-lightning/tree/master/tests).
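For orientation on the split this hunk describes (science code in the LightningModule, engineering handled by the Trainer), here is a minimal, hypothetical sketch; the model, dummy data, and settings below are illustrative and not part of this commit:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class LitClassifier(pl.LightningModule):
    """Science code: model, training step, optimizer."""
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self(x), y)
        return {'loss': loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        # dummy data so the sketch stays self-contained
        x, y = torch.randn(256, 32), torch.randint(0, 2, (256,))
        return DataLoader(TensorDataset(x, y), batch_size=32)

# Engineering code (loops, device placement, checkpointing) is the Trainer's job.
trainer = pl.Trainer(max_epochs=1)
trainer.fit(LitClassifier())
```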

docs/source/hyperparameters.rst

Lines changed: 7 additions & 11 deletions
@@ -27,9 +27,9 @@ Argparser Best Practices
 ^^^^^^^^^^^^^^^^^^^^^^^^
 It is best practice to layer your arguments in three sections.

-1. Trainer args (gpus, num_nodes, etc...)
-2. Model specific arguments (layer_dim, num_layers, learning_rate, etc...)
-3. Program arguments (data_path, cluster_email, etc...)
+1. Trainer args (gpus, num_nodes, etc...)
+2. Model specific arguments (layer_dim, num_layers, learning_rate, etc...)
+3. Program arguments (data_path, cluster_email, etc...)

 We can do this as follows. First, in your LightningModule, define the arguments
 specific to that module. Remember that data splits or data paths may also be specific to
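The three layers listed above are conventionally wired together as sketched below; `add_model_specific_args` is the pattern these docs use elsewhere, and the individual flag names (`layer_dim`, `data_path`, ...) are only illustrative:

```python
from argparse import ArgumentParser
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    @staticmethod
    def add_model_specific_args(parent_parser):
        # 2. model-specific arguments live next to the model that uses them
        parser = ArgumentParser(parents=[parent_parser], add_help=False)
        parser.add_argument('--layer_dim', type=int, default=128)
        parser.add_argument('--learning_rate', type=float, default=1e-3)
        return parser

parser = ArgumentParser()

# 3. program arguments
parser.add_argument('--data_path', type=str, default='./data')

# 1. Trainer args (gpus, num_nodes, ...) are added automatically
parser = pl.Trainer.add_argparse_args(parser)

# 2. model-specific args
parser = LitModel.add_model_specific_args(parser)

hparams = parser.parse_args()
```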
@@ -84,15 +84,11 @@ Finally, make sure to start the training like so:

     # YES
     model = LitModel(hparams)
-
-    # NO
-    # model = LitModel(learning_rate=hparams.learning_rate, ...)
-
-    # YES
     trainer = Trainer.from_argparse_args(hparams, early_stopping_callback=...)

     # NO
-    trainer = Trainer(gpus=hparams.gpus, ...)
+    # model = LitModel(learning_rate=hparams.learning_rate, ...)
+    #trainer = Trainer(gpus=hparams.gpus, ...)


 LightiningModule hparams
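End to end, the pattern this hunk keeps (pass the whole namespace; let `from_argparse_args` pull out the Trainer flags) looks roughly like the sketch below; `LitModel` is assumed to be a LightningModule defined as in the surrounding docs, and the extra keyword only shows that explicit arguments still override parsed flags:

```python
from argparse import ArgumentParser
import pytorch_lightning as pl

parser = ArgumentParser()
parser = pl.Trainer.add_argparse_args(parser)        # Trainer flags
parser.add_argument('--learning_rate', type=float, default=1e-3)
hparams = parser.parse_args()

# YES: hand the whole namespace over
model = LitModel(hparams)                             # assumed LightningModule from the docs above
trainer = pl.Trainer.from_argparse_args(hparams, max_epochs=5)  # explicit kwargs override parsed flags
trainer.fit(model)
```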
@@ -144,8 +140,8 @@ Now pass in the params when you init your model
 The line `self.hparams = hparams` is very special. This line assigns your hparams to the LightningModule.
 This does two things:

-1. It adds them automatically to tensorboard logs under the hparams tab.
-2. Lightning will save those hparams to the checkpoint and use them to restore the module correctly.
+1. It adds them automatically to tensorboard logs under the hparams tab.
+2. Lightning will save those hparams to the checkpoint and use them to restore the module correctly.

 Trainer args
 ^^^^^^^^^^^^
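A short sketch of what those two behaviors buy you in practice (the layer size and checkpoint path are illustrative): the assignment puts the values under the TensorBoard hparams tab and writes them into every checkpoint, so the module can be rebuilt from the file alone:

```python
import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self, hparams):
        super().__init__()
        self.hparams = hparams                         # 1. shows up under the TensorBoard hparams tab
        self.layer = torch.nn.Linear(28 * 28, hparams.layer_dim)

# 2. hparams travel with the checkpoint, so restoring needs only the file
model = LitModel.load_from_checkpoint('example.ckpt')  # illustrative path
```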

docs/source/introduction_guide.rst

Lines changed: 1 addition & 65 deletions
@@ -604,71 +604,7 @@ Notice the epoch is MUCH faster!

 ---------

-Hyperparameters
----------------
-Normally, we don't hard-code the values to a model. We usually use the command line to
-modify the network.
-
-.. code-block:: python
-
-    from argparse import ArgumentParser
-
-    parser = ArgumentParser()
-
-    # parametrize the network
-    parser.add_argument('--layer_1_dim', type=int, default=128)
-    parser.add_argument('--layer_2_dim', type=int, default=256)
-    parser.add_argument('--batch_size', type=int, default=64)
-
-    args = parser.parse_args()
-
-Now we can parametrize the LightningModule.
-
-.. code-block:: python
-    :emphasize-lines: 5,6,7,12,14
-
-    class LitMNIST(pl.LightningModule):
-        def __init__(self, hparams):
-            super().__init__()
-            self.hparams = hparams
-
-            self.layer_1 = torch.nn.Linear(28 * 28, hparams.layer_1_dim)
-            self.layer_2 = torch.nn.Linear(hparams.layer_1_dim, hparams.layer_2_dim)
-            self.layer_3 = torch.nn.Linear(hparams.layer_2_dim, 10)
-
-        def forward(self, x):
-            ...
-
-        def train_dataloader(self):
-            ...
-            return DataLoader(mnist_train, batch_size=self.hparams.batch_size)
-
-        def configure_optimizers(self):
-            return Adam(self.parameters(), lr=self.hparams.learning_rate)
-
-    hparams = parse_args()
-    model = LitMNIST(hparams)
-
-.. note:: Bonus! if (hparams) is in your module, Lightning will save it into the checkpoint and restore your
-    model using those hparams exactly.
-
-And we can also add all the flags available in the Trainer to the Argparser.
-
-.. code-block:: python
-
-    # add all the available Trainer options to the ArgParser
-    parser = pl.Trainer.add_argparse_args(parser)
-    args = parser.parse_args()
-
-And now you can start your program with
-
-.. code-block:: bash
-
-    # now you can use any trainer flag
-    $ python main.py --num_nodes 2 --gpus 8
-
-
-For a full guide on using hyperparameters, `check out the hyperparameters docs <hyperparameters.rst>`_.
+.. include:: hyperparameters.rst

 ---------

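The removed walkthrough survives via the new `.. include::`; its command-line example boils down to the flow below, a sketch showing that once `add_argparse_args` has run, Trainer flags such as `--num_nodes 2 --gpus 8` parse and forward cleanly:

```python
from argparse import ArgumentParser
import pytorch_lightning as pl

parser = ArgumentParser()
parser = pl.Trainer.add_argparse_args(parser)

# equivalent to: python main.py --num_nodes 2 --gpus 8
args = parser.parse_args(['--num_nodes', '2', '--gpus', '8'])
trainer = pl.Trainer.from_argparse_args(args)
```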