
Commit 9d138c9

Merge branch 'main' into rejig-peft-state-dict-kohya

2 parents: 196ce48 + c4d4ac2

289 files changed: 8142 additions, 2394 deletions

.github/workflows/nightly_tests.yml

Lines changed: 3 additions & 3 deletions

````diff
@@ -265,7 +265,7 @@ jobs:
 
       - name: Run PyTorch CUDA tests
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
           CUBLAS_WORKSPACE_CONFIG: :16:8
         run: |
@@ -505,7 +505,7 @@ jobs:
       #   shell: arch -arch arm64 bash {0}
       #   env:
       #     HF_HOME: /System/Volumes/Data/mnt/cache
-      #     HF_TOKEN: ${{ secrets.HF_TOKEN }}
+      #     HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
       #   run: |
       #     ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
       #       --report-log=tests_torch_mps.log \
@@ -561,7 +561,7 @@ jobs:
       #   shell: arch -arch arm64 bash {0}
       #   env:
       #     HF_HOME: /System/Volumes/Data/mnt/cache
-      #     HF_TOKEN: ${{ secrets.HF_TOKEN }}
+      #     HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
       #   run: |
       #     ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
       #       --report-log=tests_torch_mps.log \
````

.github/workflows/push_tests.yml

Lines changed: 5 additions & 5 deletions

````diff
@@ -187,7 +187,7 @@ jobs:
 
       - name: Run Flax TPU tests
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
           python -m pytest -n 0 \
             -s -v -k "Flax" \
@@ -235,7 +235,7 @@ jobs:
 
       - name: Run ONNXRuntime CUDA tests
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "Onnx" \
@@ -283,7 +283,7 @@ jobs:
           python utils/print_env.py
       - name: Run example tests on GPU
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
           RUN_COMPILE: yes
         run: |
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "compile" --make-reports=tests_torch_compile_cuda tests/
@@ -326,7 +326,7 @@ jobs:
           python utils/print_env.py
       - name: Run example tests on GPU
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "xformers" --make-reports=tests_torch_xformers_cuda tests/
       - name: Failure short reports
@@ -372,7 +372,7 @@ jobs:
 
       - name: Run example tests on GPU
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install timm
````

.github/workflows/release_tests_fast.yml

Lines changed: 8 additions & 8 deletions

````diff
@@ -81,7 +81,7 @@ jobs:
           python utils/print_env.py
       - name: Slow PyTorch CUDA checkpoint tests on Ubuntu
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
           CUBLAS_WORKSPACE_CONFIG: :16:8
         run: |
@@ -135,7 +135,7 @@ jobs:
 
       - name: Run PyTorch CUDA tests
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
           CUBLAS_WORKSPACE_CONFIG: :16:8
         run: |
@@ -186,7 +186,7 @@ jobs:
 
       - name: Run PyTorch CUDA tests
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
           CUBLAS_WORKSPACE_CONFIG: :16:8
         run: |
@@ -241,7 +241,7 @@ jobs:
 
       - name: Run slow Flax TPU tests
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
           python -m pytest -n 0 \
             -s -v -k "Flax" \
@@ -289,7 +289,7 @@ jobs:
 
       - name: Run slow ONNXRuntime CUDA tests
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "Onnx" \
@@ -337,7 +337,7 @@ jobs:
           python utils/print_env.py
       - name: Run example tests on GPU
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
           RUN_COMPILE: yes
         run: |
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "compile" --make-reports=tests_torch_compile_cuda tests/
@@ -380,7 +380,7 @@ jobs:
           python utils/print_env.py
       - name: Run example tests on GPU
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "xformers" --make-reports=tests_torch_xformers_cuda tests/
       - name: Failure short reports
@@ -426,7 +426,7 @@ jobs:
 
       - name: Run example tests on GPU
         env:
-          HF_TOKEN: ${{ secrets.HF_TOKEN }}
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install timm
````

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions

````diff
@@ -598,6 +598,8 @@
     title: Attention Processor
   - local: api/activations
     title: Custom activation functions
+  - local: api/cache
+    title: Caching methods
   - local: api/normalization
     title: Custom normalization layers
   - local: api/utilities
````

docs/source/en/api/cache.md

Lines changed: 49 additions & 0 deletions

````diff
@@ -0,0 +1,49 @@
+<!-- Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License. -->
+
+# Caching methods
+
+## Pyramid Attention Broadcast
+
+[Pyramid Attention Broadcast](https://huggingface.co/papers/2408.12588) from Xuanlei Zhao, Xiaolong Jin, Kai Wang, Yang You.
+
+Pyramid Attention Broadcast (PAB) is a method that speeds up inference in diffusion models by systematically skipping attention computations between successive inference steps and reusing cached attention states. Attention states change little between successive inference steps: the difference is most prominent in the spatial attention blocks, smaller in the temporal attention blocks, and smallest in the cross-attention blocks. Many cross-attention computations can therefore be skipped, followed by the temporal and spatial attention computations. Combined with other techniques like sequence parallelism and classifier-free guidance parallelism, PAB achieves near real-time video generation.
+
+Enable PAB with [`~PyramidAttentionBroadcastConfig`] on any pipeline. For some benchmarks, refer to [this](https://github.com/huggingface/diffusers/pull/9562) pull request.
+
+```python
+import torch
+from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig
+
+pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
+pipe.to("cuda")
+
+# Increasing the value of `spatial_attention_timestep_skip_range[0]` or decreasing the value of
+# `spatial_attention_timestep_skip_range[1]` will decrease the interval in which pyramid attention
+# broadcast is active, leading to slower inference speeds. However, large intervals can lead to
+# poorer quality of generated videos.
+config = PyramidAttentionBroadcastConfig(
+    spatial_attention_block_skip_range=2,
+    spatial_attention_timestep_skip_range=(100, 800),
+    current_timestep_callback=lambda: pipe.current_timestep,
+)
+pipe.transformer.enable_cache(config)
+```
+
+### CacheMixin
+
+[[autodoc]] CacheMixin
+
+### PyramidAttentionBroadcastConfig
+
+[[autodoc]] PyramidAttentionBroadcastConfig
+
+[[autodoc]] apply_pyramid_attention_broadcast
````
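The added page autodocs `apply_pyramid_attention_broadcast` without showing it in use. A minimal sketch of the functional route, assuming the function is exported at the top level and accepts the denoiser module plus the config (mirroring the `enable_cache` call in the added doc; verify against the autodoc'd signature):

```python
import torch
from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig

# Assumed top-level export; the hook may also live under `diffusers.hooks`.
from diffusers import apply_pyramid_attention_broadcast

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.to("cuda")

config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(100, 800),
    current_timestep_callback=lambda: pipe.current_timestep,
)
# Apply the caching hook directly to the transformer instead of calling
# `pipe.transformer.enable_cache(config)`.
apply_pyramid_attention_broadcast(pipe.transformer, config)
```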

docs/source/en/api/pipelines/flux.md

Lines changed: 47 additions & 0 deletions

````diff
@@ -309,6 +309,53 @@ image.save("output.png")
 
 When unloading the Control LoRA weights, call `pipe.unload_lora_weights(reset_to_overwritten_params=True)` to reset the `pipe.transformer` completely back to its original form. The resultant pipeline can then be used with methods like [`DiffusionPipeline.from_pipe`]. More details about this argument are available in [this PR](https://github.com/huggingface/diffusers/pull/10397).
 
+## IP-Adapter
+
+<Tip>
+
+Check out [IP-Adapter](../../../using-diffusers/ip_adapter) to learn more about how IP-Adapters work.
+
+</Tip>
+
+An IP-Adapter lets you prompt Flux with images, in addition to the text prompt. This is especially useful when describing complex concepts that are difficult to articulate through text alone and you have reference images.
+
+```python
+import torch
+from diffusers import FluxPipeline
+from diffusers.utils import load_image
+
+pipe = FluxPipeline.from_pretrained(
+    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
+).to("cuda")
+
+image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flux_ip_adapter_input.jpg").resize((1024, 1024))
+
+pipe.load_ip_adapter(
+    "XLabs-AI/flux-ip-adapter",
+    weight_name="ip_adapter.safetensors",
+    image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14"
+)
+pipe.set_ip_adapter_scale(1.0)
+
+image = pipe(
+    width=1024,
+    height=1024,
+    prompt="wearing sunglasses",
+    negative_prompt="",
+    true_cfg=4.0,
+    generator=torch.Generator().manual_seed(4444),
+    ip_adapter_image=image,
+).images[0]
+
+image.save('flux_ip_adapter_output.jpg')
+```
+
+<div class="justify-center">
+  <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flux_ip_adapter_output.jpg"/>
+  <figcaption class="mt-2 text-sm text-center text-gray-500">IP-Adapter examples with prompt "wearing sunglasses"</figcaption>
+</div>
+
 ## Running FP16 inference
 
 Flux can generate high-quality images with FP16 (i.e. to accelerate inference on Turing/Volta GPUs) but produces different outputs compared to FP32/BF16. The issue is that some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing text encoders to run with FP32 inference thus removes this output difference. See [here](https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516) for details.
````
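The trailing context paragraph describes forcing the text encoders to FP32 while the rest of the pipeline runs in FP16, but no recipe appears in this diff. A minimal sketch, assuming the standard `FluxPipeline` component names (`text_encoder` for CLIP, `text_encoder_2` for T5):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
)
# Keep both text encoders in full precision so their activations are not
# clipped in FP16; the transformer and VAE stay in FP16.
pipe.text_encoder.to(torch.float32)
pipe.text_encoder_2.to(torch.float32)
pipe.to("cuda")  # moves devices only, dtypes are preserved

image = pipe(
    prompt="a tiny astronaut hatching from an egg on the moon",
    num_inference_steps=28,
    guidance_scale=3.5,
    generator=torch.Generator().manual_seed(0),
).images[0]
image.save("flux_fp16.png")
```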

docs/source/en/api/pipelines/mochi.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -115,7 +115,7 @@ export_to_video(frames, "mochi.mp4", fps=30)
 
 ## Reproducing the results from the Genmo Mochi repo
 
-The [Genmo Mochi implementation](https://github.com/genmoai/mochi/tree/main) uses different precision values for each stage in the inference process. The text encoder and VAE use `torch.float32`, while the DiT uses `torch.bfloat16` with the [attention kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html#torch.nn.attention.sdpa_kernel) set to `EFFICIENT_ATTENTION`. Diffusers pipelines currently do not support setting different `dtypes` for different stages of the pipeline. In order to run inference in the same way as the the original implementation, please refer to the following example.
+The [Genmo Mochi implementation](https://github.com/genmoai/mochi/tree/main) uses different precision values for each stage in the inference process. The text encoder and VAE use `torch.float32`, while the DiT uses `torch.bfloat16` with the [attention kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html#torch.nn.attention.sdpa_kernel) set to `EFFICIENT_ATTENTION`. Diffusers pipelines currently do not support setting different `dtypes` for different stages of the pipeline. In order to run inference in the same way as the original implementation, please refer to the following example.
 
 <Tip>
 The original Mochi implementation zeros out empty prompts. However, enabling this option and placing the entire pipeline under autocast can lead to numerical overflows with the T5 text encoder.
````
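The example the changed paragraph points to is outside this diff's context. A minimal sketch of the mixed-precision recipe it describes, under the assumption that `MochiPipeline.encode_prompt` returns prompt embeddings plus attention masks for both the prompt and the negative prompt (verify the exact return values and arguments against the current API):

```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Components load in float32 by default; only the denoising loop below runs in bfloat16.
pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")
pipe.enable_vae_tiling()
pipe.enable_model_cpu_offload()

prompt = "An aerial shot of a rugged coastline at golden hour."

with torch.no_grad():
    # Encode the prompt with the T5 text encoder in full float32 precision.
    prompt_embeds, prompt_attention_mask, negative_embeds, negative_attention_mask = (
        pipe.encode_prompt(prompt=prompt)
    )

# Run the DiT in bfloat16 with the memory-efficient SDPA kernel selected.
with torch.autocast("cuda", torch.bfloat16, cache_enabled=False):
    with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
        frames = pipe(
            prompt_embeds=prompt_embeds,
            prompt_attention_mask=prompt_attention_mask,
            negative_prompt_embeds=negative_embeds,
            negative_prompt_attention_mask=negative_attention_mask,
            num_inference_steps=64,
        ).frames[0]

export_to_video(frames, "mochi.mp4", fps=30)
```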

docs/source/en/api/utilities.md

Lines changed: 4 additions & 0 deletions

````diff
@@ -41,3 +41,7 @@ Utility and helper functions for working with 🤗 Diffusers.
 ## randn_tensor
 
 [[autodoc]] utils.torch_utils.randn_tensor
+
+## apply_layerwise_casting
+
+[[autodoc]] hooks.layerwise_casting.apply_layerwise_casting
````

docs/source/en/installation.md

Lines changed: 34 additions & 6 deletions

````diff
@@ -23,32 +23,60 @@ You should install 🤗 Diffusers in a [virtual environment](https://docs.python
 If you're unfamiliar with Python virtual environments, take a look at this [guide](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/).
 A virtual environment makes it easier to manage different projects and avoid compatibility issues between dependencies.
 
-Start by creating a virtual environment in your project directory:
+Create a virtual environment with Python or [uv](https://docs.astral.sh/uv/), a fast Rust-based Python package and project manager (refer to the uv [installation guide](https://docs.astral.sh/uv/getting-started/installation/) for setup instructions).
+
+<hfoptions id="install">
+<hfoption id="uv">
 
 ```bash
-python -m venv .env
+uv venv my-env
+source my-env/bin/activate
 ```
 
-Activate the virtual environment:
+</hfoption>
+<hfoption id="Python">
 
 ```bash
-source .env/bin/activate
+python -m venv my-env
+source my-env/bin/activate
 ```
 
-You should also install 🤗 Transformers because 🤗 Diffusers relies on its models:
+</hfoption>
+</hfoptions>
+
+You should also install 🤗 Transformers because 🤗 Diffusers relies on its models.
 
 
 <frameworkcontent>
 <pt>
-Note - PyTorch only supports Python 3.8 - 3.11 on Windows.
+
+PyTorch only supports Python 3.8 - 3.11 on Windows. Install Diffusers with uv.
+
+```bash
+uv pip install diffusers["torch"] transformers
+```
+
+You can also install Diffusers with pip.
+
 ```bash
 pip install diffusers["torch"] transformers
 ```
+
 </pt>
 <jax>
+
+Install Diffusers with uv.
+
+```bash
+uv pip install diffusers["flax"] transformers
+```
+
+You can also install Diffusers with pip.
+
 ```bash
 pip install diffusers["flax"] transformers
 ```
+
 </jax>
 </frameworkcontent>
````
docs/source/en/optimization/memory.md

Lines changed: 37 additions & 0 deletions

````diff
@@ -158,6 +158,43 @@ In order to properly offload models after they're called, it is required to run
 
 </Tip>
 
+## FP8 layerwise weight-casting
+
+PyTorch supports `torch.float8_e4m3fn` and `torch.float8_e5m2` as weight storage dtypes, but they can't be used for computation in many different tensor operations due to unimplemented kernel support. However, you can use these dtypes to store model weights in fp8 precision and upcast them on-the-fly when the layers are used in the forward pass. This is known as layerwise weight-casting.
+
+Typically, inference on most models is done with `torch.float16` or `torch.bfloat16` weight/computation precision. Layerwise weight-casting cuts down the memory footprint of the model weights by approximately half.
+
+```python
+import torch
+from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel
+from diffusers.utils import export_to_video
+
+model_id = "THUDM/CogVideoX-5b"
+
+# Load the model in bfloat16 and enable layerwise casting
+transformer = CogVideoXTransformer3DModel.from_pretrained(model_id, subfolder="transformer", torch_dtype=torch.bfloat16)
+transformer.enable_layerwise_casting(storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16)
+
+# Load the pipeline
+pipe = CogVideoXPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.bfloat16)
+pipe.to("cuda")
+
+prompt = (
+    "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. "
+    "The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other "
+    "pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, "
+    "casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. "
+    "The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical "
+    "atmosphere of this unique musical performance."
+)
+video = pipe(prompt=prompt, guidance_scale=6, num_inference_steps=50).frames[0]
+export_to_video(video, "output.mp4", fps=8)
+```
+
+In the above example, layerwise casting is enabled on the transformer component of the pipeline. By default, certain layers are skipped from the FP8 weight casting because it can lead to significant degradation of generation quality. The normalization and modulation related weight parameters are also skipped by default.
+
+However, you gain more control and flexibility by directly utilizing the [`~hooks.layerwise_casting.apply_layerwise_casting`] function instead of [`~ModelMixin.enable_layerwise_casting`].
+
 ## Channels-last memory format
 
 The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance but you should still try and see if it works for your model.
````
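The last added paragraph points at [`~hooks.layerwise_casting.apply_layerwise_casting`] without showing the direct call. A minimal sketch, assuming the keyword names below (`storage_dtype`, `compute_dtype`, `skip_modules_pattern`) match the autodoc'd signature; the skip pattern shown is illustrative, not the library default:

```python
import torch
from diffusers import CogVideoXTransformer3DModel
from diffusers.hooks import apply_layerwise_casting

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Store weights in fp8 and upcast them to bfloat16 on the fly in the forward pass.
# `skip_modules_pattern` is an assumed illustration: it keeps quality-sensitive
# layers (e.g. patch embedding and normalization) out of the fp8 cast.
apply_layerwise_casting(
    transformer,
    storage_dtype=torch.float8_e4m3fn,
    compute_dtype=torch.bfloat16,
    skip_modules_pattern=["patch_embed", "norm"],
)
```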
