Commit 15eb77b

sayakpaul and stevhliu authored
Update distributed_inference.md to include a fuller example on distributed inference (#9152)
* Update distributed_inference.md
* Update docs/source/en/training/distributed_inference.md

Co-authored-by: Steven Liu <[email protected]>
1 parent 413ca29 commit 15eb77b

File tree

1 file changed (+2, -2 lines)

docs/source/en/training/distributed_inference.md

Lines changed: 2 additions & 2 deletions
@@ -48,7 +48,7 @@ accelerate launch run_distributed.py --num_processes=2

 <Tip>

-To learn more, take a look at the [Distributed Inference with 🤗 Accelerate](https://huggingface.co/docs/accelerate/en/usage_guides/distributed_inference#distributed-inference-with-accelerate) guide.
+Refer to this minimal example [script](https://gist.github.com/sayakpaul/cfaebd221820d7b43fae638b4dfa01ba) for running inference across multiple GPUs. To learn more, take a look at the [Distributed Inference with 🤗 Accelerate](https://huggingface.co/docs/accelerate/en/usage_guides/distributed_inference#distributed-inference-with-accelerate) guide.

 </Tip>

@@ -108,4 +108,4 @@ torchrun run_distributed.py --nproc_per_node=2
 ```

 > [!TIP]
-> You can use `device_map` within a [`DiffusionPipeline`] to distribute its model-level components on multiple devices. Refer to the [Device placement](../tutorials/inference_with_big_models#device-placement) guide to learn more.
+> You can use `device_map` within a [`DiffusionPipeline`] to distribute its model-level components on multiple devices. Refer to the [Device placement](../tutorials/inference_with_big_models#device-placement) guide to learn more.
