
Commit 3aef67c

captain695 and rohitgr7 authored

Documentation Fixes [skip ci] (#3955)

* Documentation Fixes: scanned for errors; fixed indentation, spelling, and grammar.
* Apply suggestions from code review

Co-authored-by: Rohit Gupta <[email protected]>

1 parent 0a3aa8b · commit 3aef67c

File tree

4 files changed: +19 −18 lines changed

docs/source/bolts.rst

Lines changed: 7 additions & 7 deletions

@@ -11,21 +11,21 @@ In bolts we have:
 
 - A collection of pretrained state-of-the-art models.
 - A collection of models designed to bootstrap your research.
-- A collection of Callbacks, transforms, full datasets.
+- A collection of callbacks, transforms, full datasets.
 - All models work on CPUs, TPUs, GPUs and 16-bit precision.
 
 -----------------
 
 Quality control
 ---------------
-Bolts are built-by the Lightning community and contributed to bolts.
+The Lightning community builds bolts and contributes them to Bolts.
 The lightning team guarantees that contributions are:
 
-- Rigorously Tested (CPUs, GPUs, TPUs)
-- Rigorously Documented
-- Standardized via PyTorch Lightning
-- Optimized for speed
-- Checked for correctness
+- Rigorously Tested (CPUs, GPUs, TPUs).
+- Rigorously Documented.
+- Standardized via PyTorch Lightning.
+- Optimized for speed.
+- Checked for correctness.
 
 ---------
 
docs/source/early_stopping.rst

Lines changed: 4 additions & 3 deletions

@@ -52,7 +52,7 @@ To enable it:
     )
     trainer = Trainer(callbacks=[early_stop_callback])
 
-In case you need early stopping in a different part of training, subclass EarlyStopping
+In case you need early stopping in a different part of training, subclass :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping`
 and change where it is called:
 
 .. testcode::
@@ -68,10 +68,11 @@ and change where it is called:
         self._run_early_stopping_check(trainer, pl_module)
 
 .. note::
-   The EarlyStopping callback runs at the end of every validation epoch,
+   The :class:`~pytorch_lightning.callbacks.early_stopping.EarlyStopping` callback runs
+   at the end of every validation epoch,
    which, under the default configuration, happen after every training epoch.
    However, the frequency of validation can be modified by setting various parameters
-   on the :class:`~pytorch_lightning.trainer.trainer.Trainer`,
+   in the :class:`~pytorch_lightning.trainer.trainer.Trainer`,
    for example :paramref:`~pytorch_lightning.trainer.trainer.Trainer.check_val_every_n_epoch`
    and :paramref:`~pytorch_lightning.trainer.trainer.Trainer.val_check_interval`.
    It must be noted that the `patience` parameter counts the number of
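The `patience` behavior this note describes can be illustrated with a minimal, self-contained sketch. This is plain Python, not Lightning's actual implementation; `should_stop` is a hypothetical helper whose logic merely mirrors the documented rule (stop after `patience` consecutive validation checks with no improvement):

```python
def should_stop(val_losses, patience=3, min_delta=0.0):
    """Return True once `patience` consecutive validation checks
    show no improvement (beyond `min_delta`) over the best loss so far."""
    best = float("inf")
    wait = 0
    for loss in val_losses:
        if loss < best - min_delta:
            best = loss
            wait = 0  # improvement resets the counter
        else:
            wait += 1
            if wait >= patience:
                return True
    return False
```

For example, losses `[1.0, 0.9, 0.95, 0.96, 0.97]` trigger a stop with `patience=3`, because the last three checks never beat the best value of 0.9.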

docs/source/fast_training.rst

Lines changed: 2 additions & 2 deletions

@@ -40,7 +40,7 @@ Set validation check frequency within 1 training epoch
 ------------------------------------------------------
 For large datasets it's often desirable to check validation multiple times within a training loop.
 Pass in a float to check that often within 1 training epoch. Pass in an int k to check every k training batches.
-Must use an int if using an IterableDataset.
+Must use an `int` if using an `IterableDataset`.
 
 .. testcode::
 
@@ -50,7 +50,7 @@ Must use an int if using an IterableDataset.
     # check every .25 of an epoch
     trainer = Trainer(val_check_interval=0.25)
 
-    # check every 100 train batches (ie: for IterableDatasets or fixed frequency)
+    # check every 100 train batches (ie: for `IterableDatasets` or fixed frequency)
    trainer = Trainer(val_check_interval=100)
 
 ----------------
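The float-versus-int rule in this hunk can be sketched as a tiny stand-alone helper. This is an illustration of the documented semantics, not Lightning's internal code; `resolve_val_check_batches` is a hypothetical name, and treating an `IterableDataset` as having unknown (`inf`) length is an assumption made for the sketch:

```python
def resolve_val_check_batches(val_check_interval, num_train_batches):
    """A float is a fraction of one training epoch; an int is a fixed
    batch count. A fraction needs a known epoch length, which is why
    an IterableDataset (unknown length) requires an int."""
    if isinstance(val_check_interval, float):
        if num_train_batches == float("inf"):
            raise ValueError("Use an int with an IterableDataset.")
        return int(num_train_batches * val_check_interval)
    return val_check_interval
```

So with 1000 training batches, `0.25` resolves to a validation check every 250 batches, while `100` checks every 100 batches regardless of epoch length.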

docs/source/logging.rst

Lines changed: 6 additions & 6 deletions

@@ -66,7 +66,7 @@ Use the :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method to
     def training_step(self, batch, batch_idx):
         self.log('my_metric', x)
 
-Depending on where log is called from, Lightning auto-determines the correct logging mode for you.\
+Depending on where log is called from, Lightning auto-determines the correct logging mode for you. \
 But of course you can override the default behavior by manually setting the :func:`~~pytorch_lightning.core.lightning.LightningModule.log` parameters.
 
 .. code-block:: python
@@ -76,16 +76,16 @@ But of course you can override the default behavior by manually setting the :fun
 
 The :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method has a few options:
 
-* on_step: Logs the metric at the current step. Defaults to True in :func:`~~pytorch_lightning.core.lightning.LightningModule.training_step`, and :func:`~pytorch_lightning.core.lightning.LightningModule.training_step_end`.
+* `on_step`: Logs the metric at the current step. Defaults to `True` in :func:`~~pytorch_lightning.core.lightning.LightningModule.training_step`, and :func:`~pytorch_lightning.core.lightning.LightningModule.training_step_end`.
 
-* on_epoch: Automatically accumulates and logs at the end of the epoch. Defaults to True anywhere in validation or test loops, and in :func:`~~pytorch_lightning.core.lightning.LightningModule.training_epoch_end`.
+* `on_epoch`: Automatically accumulates and logs at the end of the epoch. Defaults to True anywhere in validation or test loops, and in :func:`~~pytorch_lightning.core.lightning.LightningModule.training_epoch_end`.
 
-* prog_bar: Logs to the progress bar.
+* `prog_bar`: Logs to the progress bar.
 
-* logger: Logs to the logger like Tensorboard, or any other custom logger passed to the :class:`~pytorch_lightning.trainer.trainer.Trainer`.
+* `logger`: Logs to the logger like Tensorboard, or any other custom logger passed to the :class:`~pytorch_lightning.trainer.trainer.Trainer`.
 
 
-.. note:: Setting on_epoch=True will accumulate your logged values over the full training epoch.
+.. note:: Setting `on_epoch=True` will accumulate your logged values over the full training epoch.
 
 
 Manual logging
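The per-hook defaults listed in the bullets above can be summarized as a small lookup sketch. This is plain Python written from the documented text only, not Lightning's actual mode-resolution code; `default_log_modes` is a hypothetical helper and the hook names are those mentioned in the bullets:

```python
def default_log_modes(hook_name):
    """Defaults for self.log's on_step/on_epoch as the docs describe them:
    step-wise logging in training_step and training_step_end, epoch-wise
    accumulation in training_epoch_end and the validation/test loops."""
    if hook_name in ("training_step", "training_step_end"):
        return {"on_step": True, "on_epoch": False}
    # training_epoch_end and any validation/test hook accumulate per epoch
    return {"on_step": False, "on_epoch": True}
```

Either default can be overridden explicitly, e.g. `self.log('my_metric', x, on_step=True, on_epoch=True)` logs both the per-step value and the epoch accumulation.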
