
Commit 7106048

Fixes broken LM links (#4117)
1 parent 2d5a7f5 commit 7106048

8 files changed: 13 additions, 13 deletions

docs/source/callbacks.rst (2 additions, 2 deletions)

@@ -22,7 +22,7 @@ Callback
 A callback is a self-contained program that can be reused across projects.
 
 Lightning has a callback system to execute callbacks when needed. Callbacks should capture NON-ESSENTIAL
-logic that is NOT required for your :class:`~pytorch_lightning.core.LightningModule` to run.
+logic that is NOT required for your :ref:`lightning_module` to run.
 
 Here's the flow of how the callback hooks are executed:
 
@@ -63,7 +63,7 @@ Example:
     trainer is init now
 
 We successfully extended functionality without polluting our super clean
-:class:`~pytorch_lightning.core.LightningModule` research code.
+:ref:`lightning_module` research code.
 
 -----------
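The diff context above describes Lightning's callback system, which fires hooks at fixed points in the training flow so that non-essential logic stays out of the research code. As a plain-Python sketch of that dispatch mechanism (illustrative only, not Lightning's actual implementation, although hook names like ``on_init_end`` mirror real ones):

```python
class Callback:
    """Base class: every hook is a no-op unless overridden."""
    def on_init_start(self, trainer): pass
    def on_init_end(self, trainer): pass
    def on_train_start(self, trainer): pass
    def on_train_end(self, trainer): pass


class Trainer:
    """Toy trainer that dispatches each hook to every registered callback."""
    def __init__(self, callbacks=None):
        self.callbacks = callbacks or []
        self._call("on_init_start")
        self._call("on_init_end")

    def _call(self, hook_name):
        for cb in self.callbacks:
            getattr(cb, hook_name)(self)

    def fit(self):
        self._call("on_train_start")
        # ... the actual optimization loop would run here ...
        self._call("on_train_end")


class RecordingCallback(Callback):
    """Non-essential logic lives here, not on the model."""
    def __init__(self):
        self.events = []

    def on_init_end(self, trainer):
        self.events.append("trainer is init now")

    def on_train_end(self, trainer):
        self.events.append("train ended")


cb = RecordingCallback()
trainer = Trainer(callbacks=[cb])
trainer.fit()
```

The model itself never sees the callback: the trainer owns the dispatch, which is what keeps the LightningModule "super clean".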

docs/source/converting.rst (2 additions, 2 deletions)

@@ -16,7 +16,7 @@ To enable your code to work with Lightning, here's how to organize PyTorch into
 
 1. Move your computational code
 ===============================
-Move the model architecture and forward pass to your :class:`~pytorch_lightning.core.LightningModule`.
+Move the model architecture and forward pass to your :ref:`lightning_module`.
 
 .. testcode::
 
@@ -115,4 +115,4 @@ The test loop will not be used until you call.
 
 6. Remove any .cuda() or to.device() calls
 ==========================================
-Your :class:`~pytorch_lightning.core.LightningModule` can automatically run on any hardware!
+Your :ref:`lightning_module` can automatically run on any hardware!
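Step 1 of this guide (keep the architecture, forward pass, and training logic together on one module) can be sketched without any framework. This toy class stands in for a LightningModule, and a single affine map stands in for the network; all names here are illustrative, not Lightning's API:

```python
class LitModel:
    """Toy stand-in for a LightningModule: computational code lives in forward()."""
    def __init__(self, weight=2.0, bias=1.0):
        self.weight = weight
        self.bias = bias

    def forward(self, x):
        # the "architecture": one affine transform in place of a real network
        return self.weight * x + self.bias

    def training_step(self, batch):
        # training logic also lives on the module, not in an external loop
        x, y = batch
        pred = self.forward(x)
        return (pred - y) ** 2  # squared-error loss


model = LitModel()
loss = model.training_step((3.0, 7.0))
```

Because nothing in the module touches a device, the same code runs anywhere, which is the point of step 6's "remove any .cuda() calls".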

docs/source/introduction_guide.rst (2 additions, 2 deletions)

@@ -57,7 +57,7 @@ The research
 The Model
 ---------
 
-The :class:`~pytorch_lightning.core.LightningModule` holds all the core research ingredients:
+The :ref:`lightning_module` holds all the core research ingredients:
 
 - The model
 
@@ -98,7 +98,7 @@ Let's first start with the model. In this case we'll design a 3-layer neural net
         x = F.log_softmax(x, dim=1)
         return x
 
-Notice this is a :class:`~pytorch_lightning.core.LightningModule` instead of a ``torch.nn.Module``. A LightningModule is
+Notice this is a :ref:`lightning_module` instead of a ``torch.nn.Module``. A LightningModule is
 equivalent to a pure PyTorch Module except it has added functionality. However, you can use it **EXACTLY** the same as you would a PyTorch Module.
 
 .. testcode::
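The second hunk's context shows the forward pass ending in ``F.log_softmax(x, dim=1)``. Its definition, ``log_softmax(x)_i = x_i - log(sum_j exp(x_j))``, can be checked in pure Python (a numerically naive sketch; ``torch.nn.functional.log_softmax`` uses a stabler formulation internally):

```python
import math

def log_softmax(xs):
    """Naive log-softmax over a 1-D list: x_i - log(sum_j exp(x_j))."""
    log_norm = math.log(sum(math.exp(x) for x in xs))
    return [x - log_norm for x in xs]

out = log_softmax([1.0, 2.0, 3.0])
# exponentiating the outputs recovers probabilities that sum to 1
total = sum(math.exp(o) for o in out)
```

This is why the model can return log-probabilities directly: they pair with a negative-log-likelihood loss and exponentiate back to a valid distribution.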

docs/source/logging.rst (1 addition, 1 deletion)

@@ -59,7 +59,7 @@ Lightning offers automatic log functionalities for logging scalars, or manual lo
 
 Automatic logging
 =================
-Use the :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method to log from anywhere in a :class:`~pytorch_lightning.core.LightningModule`.
+Use the :func:`~~pytorch_lightning.core.lightning.LightningModule.log` method to log from anywhere in a :ref:`lightning_module`.
 
 .. code-block:: python
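The ``log`` method touched by this diff accumulates scalars step by step and, by default, reduces them (mean) over an epoch. A minimal sketch of that accumulate-then-reduce behaviour (illustrative only; Lightning's real ``self.log`` supports many more options, e.g. ``on_step``/``on_epoch``):

```python
from collections import defaultdict

class ScalarLogger:
    """Collect scalars per key, then reduce to a mean at epoch end."""
    def __init__(self):
        self._buffer = defaultdict(list)

    def log(self, key, value):
        self._buffer[key].append(value)

    def epoch_end(self):
        reduced = {k: sum(v) / len(v) for k, v in self._buffer.items()}
        self._buffer.clear()
        return reduced


logger = ScalarLogger()
for step_loss in [4.0, 2.0, 0.0]:
    logger.log("train_loss", step_loss)
metrics = logger.epoch_end()
```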

docs/source/lr_finder.rst (2 additions, 2 deletions)

@@ -37,10 +37,10 @@ initial lr.
 
 Using Lightning's built-in LR finder
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-To enable the learning rate finder, your :class:`~pytorch_lightning.core.LightningModule` needs to have a ``learning_rate`` or ``lr`` property.
+To enable the learning rate finder, your :ref:`lightning_module` needs to have a ``learning_rate`` or ``lr`` property.
 Then, set ``Trainer(auto_lr_find=True)`` during trainer construction,
 and then call ``trainer.tune(model)`` to run the LR finder. The suggested ``learning_rate``
-will be written to the console and will be automatically set to your :class:`~pytorch_lightning.core.LightningModule`,
+will be written to the console and will be automatically set to your :ref:`lightning_module`,
 which can be accessed via ``self.learning_rate`` or ``self.lr``.
 
 .. code-block:: python
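The LR finder described in these lines sweeps the learning rate between two bounds, typically on an exponential schedule, and suggests a value from the resulting loss curve. The schedule itself is easy to sketch (the bounds and step count here are illustrative assumptions; Lightning's finder also runs short training steps and analyses the loss):

```python
def lr_sweep(min_lr=1e-8, max_lr=1.0, num_steps=5):
    """Exponentially spaced learning rates from min_lr to max_lr inclusive."""
    ratio = max_lr / min_lr
    return [min_lr * ratio ** (i / (num_steps - 1)) for i in range(num_steps)]

lrs = lr_sweep()
```

With these defaults each step multiplies the rate by a constant factor (here 100x), which is what lets a single short run cover eight orders of magnitude.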

docs/source/new-project.rst (2 additions, 2 deletions)

@@ -122,7 +122,7 @@ Step 1: Define LightningModule
 
 **SYTEM VS MODEL**
 
-A :class:`~pytorch_lightning.core.LightningModule` defines a *system* not a model.
+A :ref:`lightning_module` defines a *system* not a model.
 
 .. figure:: https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/pl_docs/model_system.png
     :width: 400

@@ -198,7 +198,7 @@ First, define the data however you want. Lightning just needs a :class:`~torch.u
     dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
     train_loader = DataLoader(dataset)
 
-Next, init the :class:`~pytorch_lightning.core.LightningModule` and the PyTorch Lightning :class:`~pytorch_lightning.trainer.Trainer`,
+Next, init the :ref:`lightning_module` and the PyTorch Lightning :class:`~pytorch_lightning.trainer.Trainer`,
 then call fit with both the data and model.
 
 .. code-block:: python
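"A LightningModule defines a *system*, not a model": the module owns the training logic while a generic trainer drives the loop over the data. A toy sketch of that division of labour (names and loop are illustrative, not Lightning's API surface):

```python
class LitSystem:
    """The 'system': model state and training_step live together."""
    def __init__(self):
        self.weight = 0.0  # stand-in for real parameters

    def training_step(self, batch):
        x, y = batch
        return (self.weight * x - y) ** 2  # squared-error loss


class Trainer:
    """Generic loop: knows nothing about this particular system."""
    def fit(self, system, train_data):
        return [system.training_step(batch) for batch in train_data]


trainer = Trainer()
losses = trainer.fit(LitSystem(), [(1.0, 2.0), (2.0, 4.0)])
```

Swapping in a different system changes nothing in the trainer, which is why ``trainer.fit(model, train_loader)`` works for any LightningModule.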

docs/source/slurm.rst (1 addition, 1 deletion)

@@ -17,7 +17,7 @@ Multi-node training
 -------------------
 To train a model using multiple nodes, do the following:
 
-1. Design your :class:`~pytorch_lightning.core.LightningModule`.
+1. Design your :ref:`lightning_module`.
 
 2. Enable ddp in the trainer
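In the multi-node DDP setup this file documents, each process is identified by a global rank derived from its node index and its local (per-node) index. The arithmetic is simple (a sketch; in practice Lightning and SLURM derive these values from environment variables such as ``SLURM_NODEID``):

```python
def global_rank(node_rank, local_rank, gpus_per_node):
    """Global rank of one DDP process across all nodes."""
    return node_rank * gpus_per_node + local_rank

def world_size(num_nodes, gpus_per_node):
    """Total number of DDP processes in the job."""
    return num_nodes * gpus_per_node

# e.g. GPU 2 on the second node of a 2-node, 4-GPU-per-node job
rank = global_rank(node_rank=1, local_rank=2, gpus_per_node=4)
size = world_size(num_nodes=2, gpus_per_node=4)
```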

docs/source/test_set.rst (1 addition, 1 deletion)

@@ -70,7 +70,7 @@ running the test set (ie: 16-bit, dp, ddp, etc...)
 Test with additional data loaders
 ---------------------------------
 You can still run inference on a test set even if the `test_dataloader` method hasn't been
-defined within your :class:`~pytorch_lightning.core.LightningModule` instance. This would be the case when your test data
+defined within your :ref:`lightning_module` instance. This would be the case when your test data
 is not available at the time your model was declared.
 
 .. code-block:: python
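The section changed here covers testing with a loader passed at test time rather than one defined on the module, for data that did not exist when the model was declared. The selection amounts to a simple fallback (a sketch of the idea, not Lightning's internals; the function name is hypothetical):

```python
def resolve_test_loader(passed_loader, module_loader):
    """Prefer an explicitly passed loader; fall back to the module's own."""
    if passed_loader is not None:
        return passed_loader
    if module_loader is not None:
        return module_loader
    raise ValueError("no test dataloader available")


# module defines no test_dataloader, so the one passed at test time wins
chosen = resolve_test_loader(passed_loader=["batch_a"], module_loader=None)
```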
