[docs] Distributed inference #12285
| Original file line number | Diff line number | Diff line change |
|---|---|---|
|
|
@@ -12,51 +12,55 @@ specific language governing permissions and limitations under the License. | |
|
|
||
| # Distributed inference | ||
|
|
||
| On distributed setups, you can run inference across multiple GPUs with 🤗 [Accelerate](https://huggingface.co/docs/accelerate/index) or [PyTorch Distributed](https://pytorch.org/tutorials/beginner/dist_overview.html), which is useful for generating with multiple prompts in parallel. | ||
| Distributed inference splits the workload across multiple GPUs. It is a useful technique for fitting larger models in memory and can process multiple prompts for higher throughput. | ||
|
|
||
| This guide will show you how to use 🤗 Accelerate and PyTorch Distributed for distributed inference. | ||
| This guide will show you how to use [Accelerate](https://huggingface.co/docs/accelerate/index) and [PyTorch Distributed](https://pytorch.org/tutorials/beginner/dist_overview.html) for distributed inference. | ||
|
|
||
| ## 🤗 Accelerate | ||
| ## Accelerate | ||
|
|
||
| 🤗 [Accelerate](https://huggingface.co/docs/accelerate/index) is a library designed to make it easy to train or run inference across distributed setups. It simplifies the process of setting up the distributed environment, allowing you to focus on your PyTorch code. | ||
| Accelerate is a library designed to simplify inference and training on multiple accelerators by handling the setup, allowing users to focus on their PyTorch code. | ||
|
|
||
| To begin, create a Python file and initialize an [`accelerate.PartialState`] to create a distributed environment; your setup is automatically detected so you don't need to explicitly define the `rank` or `world_size`. Move the [`DiffusionPipeline`] to `distributed_state.device` to assign a GPU to each process. | ||
| Install Accelerate with the following command. | ||
|
|
||
| Now use the [`~accelerate.PartialState.split_between_processes`] utility as a context manager to automatically distribute the prompts between the number of processes. | ||
| ```bash | ||
| uv pip install accelerate | ||
| ``` | ||
|
|
||
| Initialize an [`accelerate.PartialState`] class in a Python file to create a distributed environment. The [`accelerate.PartialState`] class handles process management, device control and distribution, and process coordination. | ||
|
|
||
| Move the [`DiffusionPipeline`] to [`accelerate.PartialState.device`] to assign a GPU to each process. | ||
|
|
||
| ```py | ||
| import torch | ||
| from accelerate import PartialState | ||
| from diffusers import DiffusionPipeline | ||
|
|
||
| pipeline = DiffusionPipeline.from_pretrained( | ||
| "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True | ||
| "Qwen/Qwen-Image", torch_dtype=torch.float16 | ||
| ) | ||
| distributed_state = PartialState() | ||
| pipeline.to(distributed_state.device) | ||
| ``` | ||
|
|
||
| Use the [`~accelerate.PartialState.split_between_processes`] utility as a context manager to automatically distribute the prompts between the number of processes. | ||
|
|
||
| ```py | ||
| with distributed_state.split_between_processes(["a dog", "a cat"]) as prompt: | ||
| result = pipeline(prompt).images[0] | ||
| result.save(f"result_{distributed_state.process_index}.png") | ||
| ``` | ||
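|
|
||
| If you pass more prompts than there are processes, each process receives a slice of the list rather than a single prompt. The sketch below is a minimal example of handling that case; the extra prompts and file naming scheme are hypothetical. | ||
|
|
||
| ```py | ||
| # Hypothetical prompts for illustration; split_between_processes gives each | ||
| # process a slice of this list, so iterate over the slice. | ||
| prompts = ["a dog", "a cat", "a bird", "a fish"] | ||
| with distributed_state.split_between_processes(prompts) as subset: | ||
|     for i, prompt in enumerate(subset): | ||
|         image = pipeline(prompt).images[0] | ||
|         image.save(f"result_{distributed_state.process_index}_{i}.png") | ||
| ``` | ||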
|
|
||
| Use the `--num_processes` argument to specify the number of GPUs to use, and call `accelerate launch` to run the script: | ||
| Call `accelerate launch` to run the script and use the `--num_processes` argument to set the number of GPUs to use. | ||
|
|
||
| ```bash | ||
| accelerate launch --num_processes=2 run_distributed.py | ||
| ``` | ||
|
|
||
| <Tip> | ||
|
|
||
| Refer to this minimal example [script](https://gist.github.com/sayakpaul/cfaebd221820d7b43fae638b4dfa01ba) for running inference across multiple GPUs. To learn more, take a look at the [Distributed Inference with 🤗 Accelerate](https://huggingface.co/docs/accelerate/en/usage_guides/distributed_inference#distributed-inference-with-accelerate) guide. | ||
|
|
||
| </Tip> | ||
|
|
||
| ## PyTorch Distributed | ||
|
|
||
| PyTorch supports [`DistributedDataParallel`](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) which enables data parallelism. | ||
| PyTorch [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) enables [data parallelism](https://huggingface.co/spaces/nanotron/ultrascale-playbook?section=data_parallelism), which replicates the same model on each device to process different batches of data in parallel. | ||
|
|
||
| To start, create a Python file and import `torch.distributed` and `torch.multiprocessing` to set up the distributed process group and to spawn the processes for inference on each GPU. You should also initialize a [`DiffusionPipeline`]: | ||
| Import `torch.distributed` and `torch.multiprocessing` into a Python file to set up the distributed process group and to spawn the processes for inference on each GPU. | ||
|
|
||
| ```py | ||
| import torch | ||
|
|
@@ -65,20 +69,20 @@ import torch.multiprocessing as mp | |
|
|
||
| from diffusers import DiffusionPipeline | ||
|
|
||
| sd = DiffusionPipeline.from_pretrained( | ||
| "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True | ||
| pipeline = DiffusionPipeline.from_pretrained( | ||
| "Qwen/Qwen-Image", torch_dtype=torch.float16, | ||
| ) | ||
| ``` | ||
|
|
||
| You'll want to create a function to run inference; [`init_process_group`](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group) handles creating a distributed environment with the type of backend to use, the `rank` of the current process, and the `world_size` or the number of processes participating. If you're running inference in parallel over 2 GPUs, then the `world_size` is 2. | ||
| Create a function for inference with [init_process_group](https://pytorch.org/docs/stable/distributed.html?highlight=init_process_group#torch.distributed.init_process_group). This function creates a distributed environment with the backend type, the `rank` of the current process, and the `world_size` or number of processes participating (for example, 2 GPUs would be `world_size=2`). | ||
|
|
||
| Move the [`DiffusionPipeline`] to `rank` and use `get_rank` to assign a GPU to each process, where each process handles a different prompt: | ||
| Move the pipeline to `rank` and use `get_rank` to assign a GPU to each process. Each process handles a different prompt. | ||
|
|
||
| ```py | ||
| def run_inference(rank, world_size): | ||
| dist.init_process_group("nccl", rank=rank, world_size=world_size) | ||
|
|
||
| sd.to(rank) | ||
| pipeline.to(rank) | ||
|
|
||
| if torch.distributed.get_rank() == 0: | ||
| prompt = "a dog" | ||
|
|
@@ -89,7 +93,7 @@ def run_inference(rank, world_size): | |
| image.save(f"./{'_'.join(prompt)}.png") | ||
| ``` | ||
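|
|
||
| A minimal end-to-end sketch of `run_inference` might look like the following; the NCCL backend, per-rank prompts, and file names here are assumptions rather than the exact contents of the script. | ||
|
|
||
| ```py | ||
| # Minimal sketch (continues the script above and reuses the module-level `pipeline`). | ||
| def run_inference(rank, world_size): | ||
|     # Create the distributed environment for this process; NCCL is assumed for GPUs. | ||
|     dist.init_process_group("nccl", rank=rank, world_size=world_size) | ||
|
|
||
|     # Assign this process to its GPU and give each rank its own prompt. | ||
|     pipeline.to(rank) | ||
|     prompt = "a dog" if rank == 0 else "a cat" | ||
|
|
||
|     image = pipeline(prompt).images[0] | ||
|     image.save(f"./{prompt.replace(' ', '_')}.png") | ||
|
|
||
|     dist.destroy_process_group() | ||
| ``` | ||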
|
|
||
| To run the distributed inference, call [`mp.spawn`](https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn) to run the `run_inference` function on the number of GPUs defined in `world_size`: | ||
| Use [mp.spawn](https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn) to create the number of processes defined in `world_size`. | ||
|
|
||
| ```py | ||
| def main(): | ||
|
|
@@ -101,31 +105,26 @@ if __name__ == "__main__": | |
| main() | ||
| ``` | ||
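|
|
||
| A minimal sketch of the launcher, assuming two GPUs; `mp.spawn` calls `run_inference(rank, world_size)` once per process. | ||
|
|
||
| ```py | ||
| def main(): | ||
|     world_size = 2  # assumption: one process per GPU on a 2-GPU machine | ||
|     mp.spawn(run_inference, args=(world_size,), nprocs=world_size, join=True) | ||
|
|
||
| if __name__ == "__main__": | ||
|     main() | ||
| ``` | ||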
|
|
||
| Once you've completed the inference script, use the `--nproc_per_node` argument to specify the number of GPUs to use and call `torchrun` to run the script: | ||
| Call `torchrun` to run the inference script and use the `--nproc_per_node` argument to set the number of GPUs to use. | ||
|
|
||
| ```bash | ||
| torchrun --nproc_per_node=2 run_distributed.py | ||
| ``` | ||
|
|
||
| > [!TIP] | ||
| > You can use `device_map` within a [`DiffusionPipeline`] to distribute its model-level components on multiple devices. Refer to the [Device placement](../tutorials/inference_with_big_models#device-placement) guide to learn more. | ||
|
|
||
| ## Model sharding | ||
|
| Review thread on this change: |
| - What happened here? |
| - I removed it because it seems more like a "recipe" for progressively and strategically fitting models on a GPU by loading and removing them. I don't think a user is really learning anything new/useful about it. I would suggest removing it or at least moving it to Resources > Task Recipes. |
| - Removal doesn't sound right as the content is useful IMO. Including it in "Task Recipes" might also hamper its discoverability. |
| - Ok, I added it back :) |
| - Looks like it's still discarded? |
| - Should be here now! |
||
| ## device_map | ||
|
|
||
| Modern diffusion systems such as [Flux](../api/pipelines/flux) are very large and have multiple models. For example, [Flux.1-Dev](https://hf.co/black-forest-labs/FLUX.1-dev) is made up of two text encoders - [T5-XXL](https://hf.co/google/t5-v1_1-xxl) and [CLIP-L](https://hf.co/openai/clip-vit-large-patch14) - a [diffusion transformer](../api/models/flux_transformer), and a [VAE](../api/models/autoencoderkl). With a model this size, it can be challenging to run inference on consumer GPUs. | ||
| The `device_map` argument enables distributed inference by automatically placing model components on separate GPUs. This is especially useful when a model doesn't fit on a single GPU. You can use `device_map` to selectively load and unload the required model components at a given stage as shown in the example below (assumes two GPUs are available). | ||
|
|
||
| Model sharding is a technique that distributes models across GPUs when the models don't fit on a single GPU. The example below assumes two 16GB GPUs are available for inference. | ||
|
|
||
| Start by computing the text embeddings with the text encoders. Keep the text encoders on two GPUs by setting `device_map="balanced"`. The `balanced` strategy evenly distributes the model on all available GPUs. Use the `max_memory` parameter to allocate the maximum amount of memory for each text encoder on each GPU. | ||
|
|
||
| > [!TIP] | ||
| > **Only** load the text encoders for this step! The diffusion transformer and VAE are loaded in a later step to preserve memory. | ||
| Set `device_map="balanced"` to evenly distributes the text encoders on all available GPUs. You can use the `max_memory` argument to allocate a maximum amount of memory for each text encoder. Don't load any other pipeline components to avoid memory usage. | ||
|
|
||
| ```py | ||
| from diffusers import FluxPipeline | ||
| import torch | ||
|
|
||
| prompt = "a photo of a dog with cat-like look" | ||
| prompt = """ | ||
| cinematic film still of a cat sipping a margarita in a pool in Palm Springs, California | ||
| highly detailed, high budget hollywood movie, cinemascope, moody, epic, gorgeous, film grain | ||
| """ | ||
|
|
||
| pipeline = FluxPipeline.from_pretrained( | ||
| "black-forest-labs/FLUX.1-dev", | ||
|
|
@@ -142,7 +141,7 @@ with torch.no_grad(): | |
| ) | ||
| ``` | ||
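|
|
||
| A fuller sketch of this step is shown below; the 16GB memory caps and `max_sequence_length` value are assumptions, and the call continues from the imports and prompt above. | ||
|
|
||
| ```py | ||
| # Load only the text encoders; the transformer and VAE stay unloaded to save memory. | ||
| pipeline = FluxPipeline.from_pretrained( | ||
|     "black-forest-labs/FLUX.1-dev", | ||
|     transformer=None, | ||
|     vae=None, | ||
|     device_map="balanced", | ||
|     max_memory={0: "16GB", 1: "16GB"}, | ||
|     torch_dtype=torch.bfloat16, | ||
| ) | ||
|
|
||
| with torch.no_grad(): | ||
|     prompt_embeds, pooled_prompt_embeds, text_ids = pipeline.encode_prompt( | ||
|         prompt=prompt, prompt_2=None, max_sequence_length=512 | ||
|     ) | ||
| ``` | ||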
|
|
||
| Once the text embeddings are computed, remove them from the GPU to make space for the diffusion transformer. | ||
| After the text embeddings are computed, remove the text encoders from the GPU to make space for the diffusion transformer. | ||
|
|
||
| ```py | ||
| import gc | ||
|
|
@@ -162,7 +161,7 @@ del pipeline | |
| flush() | ||
| ``` | ||
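|
|
||
| The `flush` helper used above is a small utility; a minimal sketch, assuming it only needs to reclaim GPU memory, is shown below. | ||
|
|
||
| ```py | ||
| def flush(): | ||
|     # Collect unreachable Python objects first, then release cached CUDA memory. | ||
|     gc.collect() | ||
|     torch.cuda.empty_cache() | ||
| ``` | ||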
|
|
||
| Load the diffusion transformer next which has 12.5B parameters. This time, set `device_map="auto"` to automatically distribute the model across two 16GB GPUs. The `auto` strategy is backed by [Accelerate](https://hf.co/docs/accelerate/index) and available as a part of the [Big Model Inference](https://hf.co/docs/accelerate/concept_guides/big_model_inference) feature. It starts by distributing a model across the fastest device first (GPU) before moving to slower devices like the CPU and hard drive if needed. The trade-off of storing model parameters on slower devices is slower inference latency. | ||
| Set `device_map="auto"` to automatically distribute the model on the two GPUs. This strategy places a model on the fastest device first before placing a model on a slower device like a CPU or hard drive if needed. The trade-off of storing model parameters on slower devices is slower inference latency. | ||
|
|
||
| ```py | ||
| from diffusers import AutoModel | ||
|
|
@@ -177,9 +176,9 @@ transformer = AutoModel.from_pretrained( | |
| ``` | ||
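|
|
||
| A sketch of the full loading call, assuming the transformer weights live in the `transformer` subfolder of the same Flux checkpoint. | ||
|
|
||
| ```py | ||
| transformer = AutoModel.from_pretrained( | ||
|     "black-forest-labs/FLUX.1-dev", | ||
|     subfolder="transformer", | ||
|     device_map="auto", | ||
|     torch_dtype=torch.bfloat16, | ||
| ) | ||
| ``` | ||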
|
|
||
| > [!TIP] | ||
| > At any point, you can try `print(pipeline.hf_device_map)` to see how the various models are distributed across devices. This is useful for tracking the device placement of the models. You can also try `print(transformer.hf_device_map)` to see how the transformer model is sharded across devices. | ||
| > Run `pipeline.hf_device_map` to see how the various models are distributed across devices. This is useful for tracking model device placement. You can also inspect `transformer.hf_device_map` to see how the transformer model is distributed. | ||
|
|
||
| Add the transformer model to the pipeline for denoising, but set the other model-level components like the text encoders and VAE to `None` because you don't need them yet. | ||
| Add the transformer model to the pipeline and set `output_type="latent"` to generate the latents. | ||
|
|
||
| ```py | ||
| pipeline = FluxPipeline.from_pretrained( | ||
|
|
@@ -206,21 +205,12 @@ latents = pipeline( | |
| ).images | ||
| ``` | ||
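|
|
||
| A sketch of the denoising call, reusing the precomputed embeddings and the sharded transformer; the resolution, step count, and guidance scale are illustrative. | ||
|
|
||
| ```py | ||
| # Rebuild the pipeline with only the transformer; text encoders and VAE stay unloaded. | ||
| pipeline = FluxPipeline.from_pretrained( | ||
|     "black-forest-labs/FLUX.1-dev", | ||
|     text_encoder=None, | ||
|     text_encoder_2=None, | ||
|     tokenizer=None, | ||
|     tokenizer_2=None, | ||
|     vae=None, | ||
|     transformer=transformer, | ||
|     torch_dtype=torch.bfloat16, | ||
| ) | ||
|
|
||
| latents = pipeline( | ||
|     prompt_embeds=prompt_embeds, | ||
|     pooled_prompt_embeds=pooled_prompt_embeds, | ||
|     num_inference_steps=50, | ||
|     guidance_scale=3.5, | ||
|     height=1024, | ||
|     width=1024, | ||
|     output_type="latent", | ||
| ).images | ||
| ``` | ||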
|
|
||
| Remove the pipeline and transformer from memory as they're no longer needed. | ||
|
|
||
| ```py | ||
| del pipeline.transformer | ||
| del pipeline | ||
|
|
||
| flush() | ||
| ``` | ||
|
|
||
| Finally, decode the latents with the VAE into an image. The VAE is typically small enough to be loaded on a single GPU. | ||
| Remove the pipeline and transformer from memory and load a VAE to decode the latents. The VAE is typically small enough to be loaded on a single device. | ||
|
|
||
| ```py | ||
| import torch | ||
| from diffusers import AutoencoderKL | ||
| from diffusers.image_processor import VaeImageProcessor | ||
| import torch | ||
|
|
||
| vae = AutoencoderKL.from_pretrained(ckpt_id, subfolder="vae", torch_dtype=torch.bfloat16).to("cuda") | ||
| vae_scale_factor = 2 ** (len(vae.config.block_out_channels) - 1) | ||
|
|
@@ -236,4 +226,8 @@ with torch.no_grad(): | |
| image[0].save("split_transformer.png") | ||
| ``` | ||
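|
|
||
| A sketch of the decode step; the 1024x1024 resolution must match the resolution used during denoising, and the unpacking assumes Flux's packed latent layout. | ||
|
|
||
| ```py | ||
| image_processor = VaeImageProcessor(vae_scale_factor=vae_scale_factor) | ||
|
|
||
| with torch.no_grad(): | ||
|     # Unpack the Flux latents, undo the VAE normalization, then decode to pixels. | ||
|     latents = FluxPipeline._unpack_latents(latents, 1024, 1024, vae_scale_factor) | ||
|     latents = (latents / vae.config.scaling_factor) + vae.config.shift_factor | ||
|     image = vae.decode(latents, return_dict=False)[0] | ||
|     image = image_processor.postprocess(image, output_type="pil") | ||
|     image[0].save("split_transformer.png") | ||
| ``` | ||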
|
|
||
| By selectively loading and unloading the models you need at a given stage and sharding the largest models across multiple GPUs, it is possible to run inference with large models on consumer GPUs. | ||
| ## Resources | ||
|
|
||
| - Take a look at this [script](https://gist.github.com/sayakpaul/cfaebd221820d7b43fae638b4dfa01ba) for a minimal example of distributed inference with Accelerate. | ||
| - For more details, check out Accelerate's [Distributed inference](https://huggingface.co/docs/accelerate/en/usage_guides/distributed_inference#distributed-inference-with-accelerate) guide. | ||
| - The `device_map` argument assigns models or an entire pipeline to devices. Refer to the [device placement](../using-diffusers/loading#device-placement) docs for more information. | ||