2 files changed, +2 −2 lines

@@ -18,7 +18,7 @@ backpropagates the Jacobian of each sample's loss. It uses an
 ``.grad`` fields of the model's parameters. Because it has to store the full Jacobian, this approach
 uses a lot of memory.

-The recommended approach, called the :doc:`autogram <../docs/autogram/index>` engine, works by
+The recommended approach, called the :doc:`autogram engine <../docs/autogram/engine>`, works by
 backpropagating the Gramian of the Jacobian of each sample's loss with respect to the model's
 parameters. This method is more memory-efficient and generally much faster because it avoids
 storing the full Jacobians. A vector of weights is then computed by applying a
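To make the two approaches concrete, here is a minimal plain-PyTorch sketch, not TorchJD's API: the toy model and data are made up for illustration. It materializes the full Jacobian of the per-sample losses, which is exactly the memory cost the hunk above describes, then forms its Gramian, whose size depends only on the number of samples. The uniform `weights` at the end are a hypothetical stand-in for a weighting that would actually be derived from the Gramian.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
params = list(model.parameters())
x = torch.randn(8, 4)
y = torch.randn(8, 1)
losses = ((model(x) - y) ** 2).squeeze(1)  # one loss per sample, shape (8,)

# Naive approach: build the full Jacobian, one flattened gradient row per
# sample. Its size grows with the number of parameters.
rows = []
for loss in losses:
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    rows.append(torch.cat([g.flatten() for g in grads]))
jacobian = torch.stack(rows)        # shape (8, num_params)

# The Gramian of the Jacobian is only (num_samples, num_samples),
# independent of the model size.
gramian = jacobian @ jacobian.T     # shape (8, 8)

# Hypothetical uniform weights standing in for a Gramian-derived weighting;
# backpropagating the weighted loss fills the parameters' .grad fields.
weights = torch.full_like(losses, 1.0 / len(losses))
(weights @ losses).backward()
```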

@@ -46,7 +46,7 @@ the gradient of the obtained weighted loss. The iterative computation of the Gra
 Algorithm 3 of
 `Jacobian Descent For Multi-Objective Optimization <https://arxiv.org/pdf/2406.16232>`_. The
 documentation and usage example of this algorithm are provided in
-:doc:`Engine <docs/autogram/engine>`.
+:doc:`autogram.Engine <docs/autogram/engine>`.

 TorchJD is open-source, under MIT License. The source code is available on
 `GitHub <https://github.com/TorchJD/torchjd>`_.
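As a rough intuition for the iterative computation that the hunk above delegates to Algorithm 3: the full Jacobian is a concatenation of per-parameter blocks, J = [J_1 | … | J_K], so its Gramian decomposes as J Jᵀ = Σ_k J_k J_kᵀ and can be accumulated one block at a time. The sketch below only demonstrates this identity in plain PyTorch; it is not Algorithm 3 itself nor the `Engine` class the diff links to, and the model is a made-up toy.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
)
x = torch.randn(8, 4)
y = torch.randn(8, 1)
losses = ((model(x) - y) ** 2).squeeze(1)  # shape (8,)

# Accumulate the Gramian one parameter tensor at a time: only a single
# Jacobian block is alive at any point, never the full Jacobian.
n = losses.numel()
gramian = torch.zeros(n, n)
for p in model.parameters():
    block = torch.stack([
        torch.autograd.grad(loss, p, retain_graph=True)[0].flatten()
        for loss in losses
    ])                             # Jacobian w.r.t. this parameter only
    gramian += block @ block.T     # sum of J_k @ J_k.T equals J @ J.T
```

TorchJD's engine obtains the Gramian during backpropagation itself rather than via repeated `autograd.grad` calls; the loop here only illustrates the underlying algebra.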