
Commit 5a9b5e3

Nikhil Shenoy (shenoynikhil) authored

Fixes in sphinx docs links (#16255)

Co-authored-by: Nikhil Shenoy <[email protected]>

Fixes #8107
1 parent f24349b commit 5a9b5e3

File tree

10 files changed: +23 -12 lines changed

docs/source-pytorch/accelerators/ipu_intermediate.rst

Lines changed: 1 addition & 1 deletion
@@ -60,4 +60,4 @@ Lightning supports dumping all reports to a directory to open using the tool.
     trainer = pl.Trainer(accelerator="ipu", devices=8, strategy=IPUStrategy(autoreport_dir="report_dir/"))
     trainer.fit(model)

-This will dump all reports to ``report_dir/`` which can then be opened using the Graph Analyser Tool, see `Opening Reports <https://docs.graphcore.ai/projects/graph-analyser-userguide/en/latest/graph-analyser.html#opening-reports>`__.
+This will dump all reports to ``report_dir/`` which can then be opened using the Graph Analyser Tool, see `Opening Reports <https://docs.graphcore.ai/projects/graph-analyser-userguide/en/latest/opening-reports.html>`__.

docs/source-pytorch/advanced/model_parallel.rst

Lines changed: 1 addition & 1 deletion
@@ -614,7 +614,7 @@ DeepSpeed ZeRO Stage 3
 ======================

 DeepSpeed ZeRO Stage 3 shards the optimizer states, gradients and the model parameters (also optionally activations). Sharding model parameters and activations comes with an increase in distributed communication; however, it allows you to scale your models massively from one GPU to multiple GPUs.
-**The DeepSpeed team report the ability to fine-tune models with over 40B parameters on a single GPU and over 2 Trillion parameters on 512 GPUs.** For more information we suggest checking the `DeepSpeed ZeRO-3 Offload documentation <https://www.deepspeed.ai/news/2021/03/07/zero3-offload.html>`__.
+**The DeepSpeed team report the ability to fine-tune models with over 40B parameters on a single GPU and over 2 Trillion parameters on 512 GPUs.** For more information we suggest checking the `DeepSpeed ZeRO-3 Offload documentation <https://www.deepspeed.ai/2021/03/07/zero3-offload.html>`__.

 We've run benchmarks for all these features and given a simple example of how all these features work in Lightning, which you can see at `minGPT <https://github.com/SeanNaren/minGPT/tree/stage3>`_.
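
As a hedged aside (not part of this commit), enabling Stage 3 in Lightning looks roughly like the sketch below, assuming the built-in "deepspeed_stage_3" strategy alias and the BoringModel demo class as a stand-in model:

    import pytorch_lightning as pl
    from pytorch_lightning.demos.boring_classes import BoringModel  # stand-in model for illustration

    # "deepspeed_stage_3" shards optimizer states, gradients and parameters
    # across the available GPUs; precision=16 adds mixed precision on top.
    trainer = pl.Trainer(accelerator="gpu", devices=8, strategy="deepspeed_stage_3", precision=16)
    trainer.fit(BoringModel())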

docs/source-pytorch/common/lightning_module.rst

Lines changed: 1 addition & 1 deletion
@@ -1210,7 +1210,7 @@ and the Trainer will apply Truncated Backprop to it.

 (`Williams et al. "An efficient gradient-based algorithm for on-line training of
 recurrent network trajectories."
-<http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.56.7941&rep=rep1&type=pdf>`_)
+<https://ieeexplore.ieee.org/document/6797135>`_)

 `Tutorial <https://d2l.ai/chapter_recurrent-neural-networks/bptt.html>`_
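
As a hedged sketch (not part of the diff), the truncated backprop feature this citation supports is enabled by setting truncated_bptt_steps on the LightningModule; training_step then receives and returns the hidden state between chunks:

    import torch
    import pytorch_lightning as pl

    class ToyRNN(pl.LightningModule):  # hypothetical module, for illustration only
        def __init__(self):
            super().__init__()
            self.rnn = torch.nn.RNN(input_size=10, hidden_size=10, batch_first=True)
            # Backpropagate through chunks of 2 time steps instead of the full sequence.
            self.truncated_bptt_steps = 2

        def training_step(self, batch, batch_idx, hiddens):
            x, y = batch
            out, hiddens = self.rnn(x, hiddens)  # `hiddens` is carried between chunks
            loss = torch.nn.functional.mse_loss(out, y)
            return {"loss": loss, "hiddens": hiddens}

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)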

docs/source-pytorch/conf.py

Lines changed: 8 additions & 0 deletions
@@ -412,3 +412,11 @@ def package_list_from_file(file):

 # ignore all links in any CHANGELOG file
 linkcheck_exclude_documents = [r"^(.*\/)*CHANGELOG.*$"]
+
+# ignore the following relative links (false positive errors during linkcheck)
+linkcheck_ignore = [
+    r"^starter/installation.html$",
+    r"^installation.html$",
+    r"^../cli/lightning_cli.html$",
+    r"^../common/trainer.html#trainer-flags$",
+]
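
As a hedged aside (not part of the commit), the effect of these patterns can be sanity-checked offline: Sphinx's linkcheck builder skips any URI matched by one of the regular expressions, roughly as follows:

    import re

    # Patterns added above: relative links that linkcheck flags as false positives.
    linkcheck_ignore = [
        r"^starter/installation.html$",
        r"^installation.html$",
        r"^../cli/lightning_cli.html$",
        r"^../common/trainer.html#trainer-flags$",
    ]

    for uri in ("starter/installation.html", "https://pytorch.org/"):
        skipped = any(re.match(pattern, uri) for pattern in linkcheck_ignore)
        print(f"{uri}: {'skipped' if skipped else 'checked'}")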

docs/source-pytorch/ecosystem/transformers.rst

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ In Lightning Transformers, we offer the following benefits:
 - Backed by `HuggingFace Transformers <https://huggingface.co/transformers/>`_ models and datasets, spanning multiple modalities and tasks within NLP/Audio and Vision.
 - Task Abstraction for Rapid Research & Experimentation - Build your own custom transformer tasks across all modalities with little friction.
 - Powerful config composition backed by `Hydra <https://hydra.cc/>`_ - simply swap out models, optimizers, schedulers, tasks, and many more configurations without touching the code.
-- Seamless Memory and Speed Optimizations - Out-of-the-box training optimizations such as `DeepSpeed ZeRO <https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html#deepspeed>`_ or `FairScale Sharded Training <https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html#sharded-training>`_ with no code changes.
+- Seamless Memory and Speed Optimizations - Out-of-the-box training optimizations such as `DeepSpeed ZeRO <https://pytorch-lightning.readthedocs.io/en/stable/advanced/model_parallel.html#deepspeed>`_ or `FairScale Sharded Training <https://pytorch-lightning.readthedocs.io/en/stable/advanced/model_parallel.html#fairscale-sharded-training>`_ with no code changes.

 -----------------

docs/source-pytorch/extensions/callbacks.rst

Lines changed: 4 additions & 4 deletions
@@ -61,10 +61,10 @@ Examples
 ********
 You can do pretty much anything with callbacks.

-- `Add a MLP to fine-tune self-supervised networks <https://lightning-bolts.readthedocs.io/en/stable/deprecated/callbacks/self_supervised.html#sslonlineevaluator>`_.
-- `Find how to modify an image input to trick the classification result <https://lightning-bolts.readthedocs.io/en/stable/deprecated/callbacks/vision.html#confused-logit>`_.
-- `Interpolate the latent space of any variational model <https://lightning-bolts.readthedocs.io/en/stable/deprecated/callbacks/variational.html#latent-dim-interpolator>`_.
-- `Log images to Tensorboard for any model <https://lightning-bolts.readthedocs.io/en/stable/deprecated/callbacks/vision.html#tensorboard-image-generator>`_.
+- `Add a MLP to fine-tune self-supervised networks <https://lightning-bolts.readthedocs.io/en/latest/callbacks/self_supervised.html#sslonlineevaluator>`_.
+- `Find how to modify an image input to trick the classification result <https://lightning-bolts.readthedocs.io/en/latest/callbacks/vision.html#confused-logit>`_.
+- `Interpolate the latent space of any variational model <https://lightning-bolts.readthedocs.io/en/latest/callbacks/variational.html#latent-dim-interpolator>`_.
+- `Log images to Tensorboard for any model <https://lightning-bolts.readthedocs.io/en/latest/callbacks/vision.html#tensorboard-image-generator>`_.


 --------------
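
As a hedged aside (not part of the diff), every example above follows the same pattern: subclass Callback, override the hooks you need, and hand an instance to the Trainer. A minimal sketch:

    import pytorch_lightning as pl

    class PrintingCallback(pl.Callback):
        """Toy callback for illustration: announces training start and end."""

        def on_train_start(self, trainer, pl_module):
            print("Training is starting")

        def on_train_end(self, trainer, pl_module):
            print("Training has ended")

    trainer = pl.Trainer(callbacks=[PrintingCallback()])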

docs/source-pytorch/fabric/fabric.rst

Lines changed: 3 additions & 1 deletion
@@ -115,7 +115,9 @@ Here is how you run DDP with 8 GPUs and `torch.bfloat16 <https://pytorch.org/doc

     lightning run model ./path/to/train.py --strategy=ddp --devices=8 --accelerator=cuda --precision="bf16"

-Or `DeepSpeed Zero3 <https://www.deepspeed.ai/news/2021/03/07/zero3-offload.html>`_ with mixed precision:
+
+Or `DeepSpeed Zero3 <https://www.deepspeed.ai/2021/03/07/zero3-offload.html>`_ with mixed precision:
+

 .. code-block:: bash
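
As a hedged aside (not taken from the diff), the same setup can be expressed programmatically, assuming Fabric accepts the same accelerator/strategy/precision aliases as the `lightning run model` launcher:

    from lightning.fabric import Fabric

    # Mirrors the CLI flags above: 8 CUDA devices, DeepSpeed ZeRO Stage 3,
    # 16-bit mixed precision (aliases assumed from the launcher flags).
    fabric = Fabric(accelerator="cuda", devices=8, strategy="deepspeed_stage_3", precision=16)
    fabric.launch()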

src/pytorch_lightning/callbacks/pruning.py

Lines changed: 1 addition & 1 deletion
@@ -261,7 +261,7 @@ def _wrap_pruning_fn(pruning_fn: Callable, **kwargs: Any) -> Callable:
     def make_pruning_permanent(self, module: nn.Module) -> None:
         """Removes pruning buffers from any pruned modules.

-        Adapted from https://github.com/pytorch/pytorch/blob/1.7.1/torch/nn/utils/prune.py#L1176-L1180
+        Adapted from https://github.com/pytorch/pytorch/blob/v1.7.1/torch/nn/utils/prune.py#L1118-L1122
         """
         for _, module in module.named_modules():
             for k in list(module._forward_pre_hooks):
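
For context (a hedged sketch, not part of this commit), make_pruning_permanent is exercised through the ModelPruning callback; typical usage looks like:

    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import ModelPruning

    # Prune 50% of the weights with L1 unstructured pruning; with
    # make_pruning_permanent=True the pruning buffers are stripped at the
    # end of training via the method patched above.
    pruning = ModelPruning("l1_unstructured", amount=0.5, make_pruning_permanent=True)
    trainer = pl.Trainer(callbacks=[pruning])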

src/pytorch_lightning/callbacks/quantization.py

Lines changed: 2 additions & 1 deletion
@@ -151,7 +151,8 @@ def custom_trigger_last(trainer):
            not be controlled by the callback.

    .. _PyTorch Quantization: https://pytorch.org/docs/stable/quantization.html#quantization-aware-training
-    .. _torch.quantization.QConfig: https://pytorch.org/docs/stable/torch.quantization.html#torch.quantization.QConfig
+    .. _torch.quantization.QConfig:
+        https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.QConfig.html#qconfig
     """

     OBSERVER_TYPES = ("histogram", "average")
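
As a hedged sketch (not part of the diff), the QConfig reference above is consumed by the QuantizationAwareTraining callback, which accepts either a preset backend name or a full torch.quantization.QConfig via its `qconfig` argument:

    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import QuantizationAwareTraining

    # "histogram" and "average" are the OBSERVER_TYPES listed above; a custom
    # torch.quantization.QConfig instance could be passed as `qconfig` instead.
    qat = QuantizationAwareTraining(qconfig="fbgemm", observer_type="histogram")
    trainer = pl.Trainer(callbacks=[qat])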

src/pytorch_lightning/strategies/ipu.py

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ def __init__(

         device_iterations: Number of iterations to run on device at once before returning to host.
             This can be used as an optimization to speed up training.
-            https://docs.graphcore.ai/projects/poptorch-user-guide/en/0.1.67/batching.html
+            https://docs.graphcore.ai/projects/poptorch-user-guide/en/latest/batching.html
         autoreport: Enable auto-reporting for IPUs using PopVision
             https://docs.graphcore.ai/projects/graphcore-popvision-user-guide/en/latest/graph/graph.html
         autoreport_dir: Optional directory to store autoReport output.
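
As a hedged sketch (not part of the diff), these constructor arguments are typically passed like so, assuming the IPUStrategy import path used elsewhere in these docs:

    import pytorch_lightning as pl
    from pytorch_lightning.strategies import IPUStrategy

    # Run 32 iterations on-device per host round trip and dump PopVision
    # auto-reports to report_dir/ (see the parameters documented above).
    strategy = IPUStrategy(device_iterations=32, autoreport=True, autoreport_dir="report_dir/")
    trainer = pl.Trainer(accelerator="ipu", devices=8, strategy=strategy)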
