
Commit 06c6af6: Fix link
1 parent 55a6fde

File tree: 2 files changed, +2 −2 lines

docs/source-fabric/advanced/compile.rst (1 addition, 1 deletion)

@@ -168,7 +168,7 @@ Here is an example:
 The advantage here is that `parallelize` is called when sharding the model,
 so :func:`torch.compile` is guaranteed to run on model shards and capture distributed operations.
-Also, when using other libraries like `torch ao <https://github.com/pytorch/ao>`
+Also, when using other libraries like `torch ao <https://github.com/pytorch/ao>`_
 that need to be applied in a similar fashion, it's easy to reason about the sequence of calls
 needed to achieve the equivalent of `compile(distributed(quantized(model)))`:
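The fix itself is the trailing underscore: in reStructuredText, backtick-quoted text with an embedded URI only becomes a hyperlink when followed by `_`. Without it, Sphinx treats the span as plain interpreted text and renders the URL literally. A minimal illustration of the syntax:

```rst
Renders as a named hyperlink (note the trailing underscore):
`torch ao <https://github.com/pytorch/ao>`_

Missing the underscore, so this renders as literal text, not a link:
`torch ao <https://github.com/pytorch/ao>`
```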
docs/source-pytorch/advanced/compile.rst (1 addition, 1 deletion)

@@ -198,7 +198,7 @@ Here is an example:
 The advantage here is that `configure_model` is called when sharding the model,
 so :func:`torch.compile` is guaranteed to run on model shards and capture distributed operations.
-Also, when using other libraries like `torch ao <https://github.com/pytorch/ao>`
+Also, when using other libraries like `torch ao <https://github.com/pytorch/ao>`_
 that need to be applied in a similar fashion, it's easy to reason about the sequence of calls
 needed to achieve the equivalent of `compile(distributed(quantized(model)))`:
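The `compile(distributed(quantized(model)))` ordering both files point to can be sketched with plain stand-in functions. The stubs below are illustrative, not the real torch/torchao APIs; they only record the order in which each wrapper is applied:

```python
# Illustrative stubs: each "wrapper" appends the step it represents,
# so the final list shows the order the transformations were applied in.
def quantized(steps):
    return steps + ["quantize"]

def distributed(steps):
    return steps + ["shard"]

def compiled(steps):
    return steps + ["compile"]

# Innermost call runs first: quantize, then shard, then compile.
# This mirrors why torch.compile should wrap the already-sharded,
# already-quantized model rather than the other way around.
order = compiled(distributed(quantized([])))
print(order)  # ['quantize', 'shard', 'compile']
```

The point of the docs change is that when `parallelize` / `configure_model` is the place where sharding happens, this nesting order is easy to read off from the call sequence.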
