
Commit 224e7d2

committed: init

1 parent c2e5ece commit 224e7d2

File tree: 1 file changed, +46 -3 lines


docs/source/en/optimization/memory.md

Lines changed: 46 additions & 3 deletions
@@ -291,13 +291,53 @@ Group offloading moves groups of internal layers ([torch.nn.ModuleList](https://

> [!WARNING]
> Group offloading may not work with all models if the forward implementation contains weight-dependent device casting of inputs because it may clash with group offloading's device casting mechanism.

Enable group offloading by configuring the `offload_type` parameter to `block_level` or `leaf_level`.

- `block_level` offloads groups of layers based on the `num_blocks_per_group` parameter. For example, if `num_blocks_per_group=2` on a model with 40 layers, 2 layers are onloaded and offloaded at a time (20 total onloads/offloads). This drastically reduces memory requirements (see the sketch after this list).
- `leaf_level` offloads individual layers at the lowest level and is equivalent to [CPU offloading](#cpu-offloading), but it can be made faster with streams without giving up inference speed.

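The difference between the two modes is easiest to see on a single model. The following is a minimal sketch, not part of this commit, of how each `offload_type` is configured with [`~ModelMixin.enable_group_offload`] (the checkpoint and `num_blocks_per_group` value are illustrative choices).

```py
import torch
from diffusers import CogVideoXTransformer3DModel

# Load a single model component to offload (illustrative checkpoint).
transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)

# block_level: onload/offload `num_blocks_per_group` layers at a time.
transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=2,
)

# leaf_level: onload/offload each layer individually for the lowest memory use
# (shown commented out because a model only needs one offloading setup).
# transformer.enable_group_offload(
#     onload_device=torch.device("cuda"),
#     offload_device=torch.device("cpu"),
#     offload_type="leaf_level",
# )
```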

Group offloading is supported for entire pipelines or individual models. Applying group offloading to the entire pipeline is the easiest option, while selectively applying it to individual models gives users more flexibility to use different offloading techniques for different models.

<hfoptions id="group-offloading">
<hfoption id="pipeline">

Call [`~DiffusionPipeline.enable_group_offload`] on a pipeline.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.hooks import apply_group_offloading
from diffusers.utils import export_to_video

onload_device = torch.device("cuda")
offload_device = torch.device("cpu")

pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipeline.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True
)

prompt = (
    "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. "
    "The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other "
    "pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, "
    "casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. "
    "The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical "
    "atmosphere of this unique musical performance."
)
video = pipeline(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
print(f"Max memory reserved: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GB")
export_to_video(video, "output.mp4", fps=8)
```

</hfoption>
<hfoption id="model">

Call [`~ModelMixin.enable_group_offload`] on standard Diffusers model components that inherit from [`ModelMixin`]. For other model components that don't inherit from [`ModelMixin`], such as a generic [torch.nn.Module](https://pytorch.org/docs/stable/generated/torch.nn.Module.html), use [`~hooks.apply_group_offloading`] instead.

```py
import torch
from diffusers import CogVideoXPipeline
# ... (unchanged lines between the two diff hunks are not shown)
export_to_video(video, "output.mp4", fps=8)
```

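For components that are plain PyTorch modules, the standalone hook is used instead of the `ModelMixin` method. Below is a minimal sketch; the choice of the text encoder, the `block_level` setting, and the `num_blocks_per_group` value are assumptions for illustration, not taken from this commit.

```py
import torch
from diffusers import CogVideoXPipeline
from diffusers.hooks import apply_group_offloading

pipeline = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# The text encoder is a Transformers module that doesn't inherit from ModelMixin,
# so group offloading is applied with the standalone helper instead.
apply_group_offloading(
    pipeline.text_encoder,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=2,
)
```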

</hfoption>
</hfoptions>

#### CUDA stream

The `use_stream` parameter can be activated for CUDA devices that support asynchronous data transfer streams to reduce overall execution time compared to [CPU offloading](#cpu-offloading). It overlaps data transfer and computation by using layer prefetching. The next layer to be executed is loaded onto the GPU while the current layer is still being executed. It can increase CPU memory usage significantly, so ensure you have twice as much CPU memory as the model size.

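As a rough sketch of what enabling streams looks like (the checkpoint and `block_level` settings are illustrative assumptions, not from this commit), it is the same group offloading call with `use_stream=True`:

```py
import torch
from diffusers import CogVideoXTransformer3DModel

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Prefetch the next layer group on a separate CUDA stream while the current one runs.
# Budget roughly 2x the model size in CPU memory when use_stream=True.
transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",
    num_blocks_per_group=2,
    use_stream=True,
)
```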