
Commit 5be9d3c

Improve links to autogram engine

1 parent a4f37b3 commit 5be9d3c

File tree

2 files changed: +2 −2 lines changed


docs/source/examples/iwrm.rst

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ backpropagates the Jacobian of each sample's loss. It uses an
 ``.grad`` fields of the model's parameters. Because it has to store the full Jacobian, this approach
 uses a lot of memory.
 
-The recommended approach, called the :doc:`autogram <../docs/autogram/index>` engine, works by
+The recommended approach, called the :doc:`autogram engine <../docs/autogram/engine>`, works by
 backpropagating the Gramian of the Jacobian of each sample's loss with respect to the model's
 parameters. This method is more memory-efficient and generally much faster because it avoids
 storing the full Jacobians. A vector of weights is then computed by applying a

docs/source/index.rst

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ the gradient of the obtained weighted loss. The iterative computation of the Gra
 Algorithm 3 of
 `Jacobian Descent For Multi-Objective Optimization <https://arxiv.org/pdf/2406.16232>`_. The
 documentation and usage example of this algorithm is provided in
-:doc:`Engine <docs/autogram/engine>`.
+:doc:`autogram.Engine <docs/autogram/engine>`.
 
 TorchJD is open-source, under MIT License. The source code is available on
 `GitHub <https://github.com/TorchJD/torchjd>`_.
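The documentation being edited describes backpropagating the Gramian of the per-sample Jacobian instead of the full Jacobian. As a rough illustration of that idea, here is a minimal sketch in plain PyTorch. This is not the TorchJD `autogram.Engine` API: the explicit Jacobian construction, the uniform weights, and all variable names are illustrative assumptions (a real aggregator such as the one in the linked paper would derive the weights from the Gramian).

```python
import torch

torch.manual_seed(0)

# Tiny linear model and a batch of 4 samples with per-sample squared losses.
model = torch.nn.Linear(3, 1)
x = torch.randn(4, 3)
y = torch.randn(4, 1)
losses = ((model(x) - y) ** 2).squeeze(1)  # shape: (m,) = (4,)

params = list(model.parameters())
num_params = sum(p.numel() for p in params)  # p = 4 here (3 weights + 1 bias)

# Full Jacobian of the per-sample losses w.r.t. the parameters: shape (m, p).
# Storing this is what makes the naive approach memory-hungry for large models.
rows = []
for loss in losses:
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    rows.append(torch.cat([g.flatten() for g in grads]))
jacobian = torch.stack(rows)  # (4, num_params)

# The Gramian J @ J.T is only (m, m), independent of the parameter count,
# which is why backpropagating it directly can save memory.
gramian = jacobian @ jacobian.T  # (4, 4)

# Illustrative placeholder weights; a real aggregator would compute them
# from the Gramian. The gradient of the weighted loss equals J.T @ w.
weights = torch.full((len(losses),), 1.0 / len(losses))
update = jacobian.T @ weights  # (num_params,)
```

The key observation is that the Gramian's size depends only on the batch size `m`, not on the number of parameters `p`, and that the gradient of any weighted sum of the losses is recoverable as `J.T @ w`.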
