
Commit c3b1e2f

Merge branch 'main' into fix_flax_use_memory_efficient_attention
2 parents d10fa09 + 4d633bf commit c3b1e2f

File tree

22 files changed: +2068 -30 lines

.github/workflows/nightly_tests.yml

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ jobs:
     runs-on: [single-gpu, nvidia-gpu, t4, ci]
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
+      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3

.github/workflows/push_tests.yml

Lines changed: 1 addition & 14 deletions
@@ -62,7 +62,7 @@ jobs:
     runs-on: [single-gpu, nvidia-gpu, t4, ci]
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0 --privileged
+      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
@@ -71,12 +71,6 @@ jobs:
       - name: NVIDIA-SMI
         run: |
           nvidia-smi
-      - name: Tailscale
-        uses: huggingface/tailscale-action@v1
-        with:
-          authkey: ${{ secrets.TAILSCALE_SSH_AUTHKEY }}
-          slackChannel: ${{ secrets.SLACK_CIFEEDBACK_CHANNEL }}
-          slackToken: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
       - name: Install dependencies
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
@@ -95,18 +89,11 @@ jobs:
             -s -v -k "not Flax and not Onnx" \
             --make-reports=tests_pipeline_${{ matrix.module }}_cuda \
             tests/pipelines/${{ matrix.module }}
-      - name: Tailscale Wait
-        if: ${{ failure() || runner.debug == '1' }}
-        uses: huggingface/tailscale-action@v1
-        with:
-          waitForSSH: true
-          authkey: ${{ secrets.TAILSCALE_SSH_AUTHKEY }}
       - name: Failure short reports
         if: ${{ failure() }}
         run: |
           cat reports/tests_pipeline_${{ matrix.module }}_cuda_stats.txt
           cat reports/tests_pipeline_${{ matrix.module }}_cuda_failures_short.txt
-
       - name: Test suite reports artifacts
         if: ${{ always() }}
         uses: actions/upload-artifact@v2

.github/workflows/ssh-runner.yml

Lines changed: 1 addition & 1 deletion
@@ -25,7 +25,7 @@ jobs:
     runs-on: [single-gpu, nvidia-gpu, "${{ github.event.inputs.runner_type }}", ci]
     container:
       image: ${{ github.event.inputs.docker_image }}
-      options: --gpus all --privileged --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
+      options: --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface/diffusers:/mnt/cache/ --gpus 0 --privileged

     steps:
       - name: Checkout diffusers

docs/source/en/_toctree.yml

Lines changed: 5 additions & 1 deletion
@@ -242,6 +242,8 @@
         title: PixArtTransformer2D
       - local: api/models/dit_transformer2d
         title: DiTTransformer2D
+      - local: api/models/hunyuan_transformer_2d
+        title: HunyuanDiT2DModel
       - local: api/models/transformer_temporal
         title: Transformer Temporal
       - local: api/models/prior_transformer
@@ -289,6 +291,8 @@
         title: DiffEdit
       - local: api/pipelines/dit
         title: DiT
+      - local: api/pipelines/hunyuandit
+        title: Hunyuan-DiT
       - local: api/pipelines/i2vgenxl
         title: I2VGen-XL
       - local: api/pipelines/pix2pix
@@ -457,4 +461,4 @@
         title: Video Processor
     title: Internal classes
     isExpanded: false
-  title: API
+  title: API
docs/source/en/api/models/hunyuan_transformer_2d.md

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License.
+-->
+
+# HunyuanDiT2DModel
+
+A Diffusion Transformer model for 2D data from [Hunyuan-DiT](https://github.com/Tencent/HunyuanDiT).
+
+## HunyuanDiT2DModel
+
+[[autodoc]] HunyuanDiT2DModel
+
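Not part of the commit: a minimal loading sketch for the new model class. The repository id and `subfolder` layout below are assumptions, not confirmed by this diff; adjust them to the actual Diffusers-format release under the Tencent-Hunyuan organization.

```python
import torch
from diffusers import HunyuanDiT2DModel

# Hypothetical checkpoint id and subfolder -- this commit does not pin a repo layout.
transformer = HunyuanDiT2DModel.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", subfolder="transformer", torch_dtype=torch.float16
)
print(sum(p.numel() for p in transformer.parameters()))  # rough parameter count
```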
docs/source/en/api/pipelines/hunyuandit.md

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License.
+-->
+
+# Hunyuan-DiT
+![chinese elements understanding](https://github.com/gnobitab/diffusers-hunyuan/assets/1157982/39b99036-c3cb-4f16-bb1a-40ec25eda573)
+
+[Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding](https://arxiv.org/abs/2405.08748) from Tencent Hunyuan.
+
+The abstract from the paper is:
+
+*We present Hunyuan-DiT, a text-to-image diffusion transformer with fine-grained understanding of both English and Chinese. To construct Hunyuan-DiT, we carefully design the transformer structure, text encoder, and positional encoding. We also build from scratch a whole data pipeline to update and evaluate data for iterative model optimization. For fine-grained language understanding, we train a Multimodal Large Language Model to refine the captions of the images. Finally, Hunyuan-DiT can perform multi-turn multimodal dialogue with users, generating and refining images according to the context. Through our holistic human evaluation protocol with more than 50 professional human evaluators, Hunyuan-DiT sets a new state-of-the-art in Chinese-to-image generation compared with other open-source models.*
+
+
+You can find the original codebase at [Tencent/HunyuanDiT](https://github.com/Tencent/HunyuanDiT) and all the available checkpoints at [Tencent-Hunyuan](https://huggingface.co/Tencent-Hunyuan/HunyuanDiT).
+
+**Highlights**: HunyuanDiT supports Chinese/English-to-image and multi-resolution generation.
+
+HunyuanDiT has the following components:
+* It uses a diffusion transformer as the backbone
+* It combines two text encoders, a bilingual CLIP and a multilingual T5 encoder
+
+
+## HunyuanDiTPipeline
+
+[[autodoc]] HunyuanDiTPipeline
+  - all
+  - __call__
+
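A minimal text-to-image sketch using the new pipeline (illustrative, not taken from the commit). The checkpoint id is an assumption; the commit only documents the pipeline and does not pin a Diffusers-format repo.

```python
import torch
from diffusers import HunyuanDiTPipeline

# Hypothetical Diffusers-format checkpoint id under the Tencent-Hunyuan organization.
pipe = HunyuanDiTPipeline.from_pretrained(
    "Tencent-Hunyuan/HunyuanDiT-Diffusers", torch_dtype=torch.float16
).to("cuda")

# The two text encoders are bilingual, so Chinese prompts work as well as English ones.
image = pipe(prompt="一个宇航员在骑马").images[0]
image.save("astronaut_riding_a_horse.png")
```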

src/diffusers/__init__.py

Lines changed: 4 additions & 0 deletions
@@ -83,6 +83,7 @@
         "ControlNetModel",
         "ControlNetXSAdapter",
         "DiTTransformer2DModel",
+        "HunyuanDiT2DModel",
         "I2VGenXLUNet",
         "Kandinsky3UNet",
         "ModelMixin",
@@ -229,6 +230,7 @@
         "BlipDiffusionPipeline",
         "CLIPImageProjection",
         "CycleDiffusionPipeline",
+        "HunyuanDiTPipeline",
         "I2VGenXLPipeline",
         "IFImg2ImgPipeline",
         "IFImg2ImgSuperResolutionPipeline",
@@ -487,6 +489,7 @@
        ControlNetModel,
        ControlNetXSAdapter,
        DiTTransformer2DModel,
+       HunyuanDiT2DModel,
        I2VGenXLUNet,
        Kandinsky3UNet,
        ModelMixin,
@@ -611,6 +614,7 @@
        AudioLDMPipeline,
        CLIPImageProjection,
        CycleDiffusionPipeline,
+       HunyuanDiTPipeline,
        I2VGenXLPipeline,
        IFImg2ImgPipeline,
        IFImg2ImgSuperResolutionPipeline,
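With these `_import_structure` entries and the matching eager imports in place, both new classes resolve from the package root; a quick check (assuming an install that includes this commit):

```python
from diffusers import HunyuanDiT2DModel, HunyuanDiTPipeline

print(HunyuanDiT2DModel.__name__, HunyuanDiTPipeline.__name__)
```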

src/diffusers/models/__init__.py

Lines changed: 2 additions & 0 deletions
@@ -38,6 +38,7 @@
     _import_structure["embeddings"] = ["ImageProjection"]
     _import_structure["modeling_utils"] = ["ModelMixin"]
     _import_structure["transformers.dit_transformer_2d"] = ["DiTTransformer2DModel"]
+    _import_structure["transformers.hunyuan_transformer_2d"] = ["HunyuanDiT2DModel"]
     _import_structure["transformers.pixart_transformer_2d"] = ["PixArtTransformer2DModel"]
     _import_structure["transformers.prior_transformer"] = ["PriorTransformer"]
     _import_structure["transformers.t5_film_transformer"] = ["T5FilmDecoder"]
@@ -78,6 +79,7 @@
     from .transformers import (
         DiTTransformer2DModel,
         DualTransformer2DModel,
+        HunyuanDiT2DModel,
         PixArtTransformer2DModel,
         PriorTransformer,
         T5FilmDecoder,

src/diffusers/models/activations.py

Lines changed: 12 additions & 0 deletions
@@ -50,6 +50,18 @@ def get_activation(act_fn: str) -> nn.Module:
         raise ValueError(f"Unsupported activation function: {act_fn}")


+class FP32SiLU(nn.Module):
+    r"""
+    SiLU activation function with input upcasted to torch.float32.
+    """
+
+    def __init__(self):
+        super().__init__()
+
+    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
+        return F.silu(inputs.float(), inplace=False).to(inputs.dtype)
+
+
 class GELU(nn.Module):
     r"""
     GELU activation function with tanh approximation support with `approximate="tanh"`.
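A small sketch, not from the commit, showing the behavior `FP32SiLU` provides: the activation is evaluated in float32 and cast back to the input dtype on return (the import path assumes this branch of the library).

```python
import torch
from diffusers.models.activations import FP32SiLU  # added by this commit

act = FP32SiLU()
x = torch.randn(4, 8, dtype=torch.float16)
y = act(x)
print(y.dtype)  # torch.float16 -- SiLU was computed in float32 internally
```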

src/diffusers/models/attention_processor.py

Lines changed: 108 additions & 0 deletions
@@ -103,6 +103,7 @@ def __init__(
         upcast_softmax: bool = False,
         cross_attention_norm: Optional[str] = None,
         cross_attention_norm_num_groups: int = 32,
+        qk_norm: Optional[str] = None,
         added_kv_proj_dim: Optional[int] = None,
         norm_num_groups: Optional[int] = None,
         spatial_norm_dim: Optional[int] = None,
@@ -161,6 +162,15 @@ def __init__(
         else:
             self.spatial_norm = None

+        if qk_norm is None:
+            self.norm_q = None
+            self.norm_k = None
+        elif qk_norm == "layer_norm":
+            self.norm_q = nn.LayerNorm(dim_head, eps=eps)
+            self.norm_k = nn.LayerNorm(dim_head, eps=eps)
+        else:
+            raise ValueError(f"unknown qk_norm: {qk_norm}. Should be None or 'layer_norm'")
+
         if cross_attention_norm is None:
             self.norm_cross = None
         elif cross_attention_norm == "layer_norm":
@@ -1426,6 +1436,104 @@ def __call__(
         return hidden_states


+class HunyuanAttnProcessor2_0:
+    r"""
+    Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). This is
+    used in the HunyuanDiT model. It applies a normalization layer and rotary embedding on the query and key vectors.
+    """
+
+    def __init__(self):
+        if not hasattr(F, "scaled_dot_product_attention"):
+            raise ImportError("AttnProcessor2_0 requires PyTorch 2.0, to use it, please upgrade PyTorch to 2.0.")
+
+    def __call__(
+        self,
+        attn: Attention,
+        hidden_states: torch.Tensor,
+        encoder_hidden_states: Optional[torch.Tensor] = None,
+        attention_mask: Optional[torch.Tensor] = None,
+        temb: Optional[torch.Tensor] = None,
+        image_rotary_emb: Optional[torch.Tensor] = None,
+    ) -> torch.Tensor:
+        from .embeddings import apply_rotary_emb
+
+        residual = hidden_states
+        if attn.spatial_norm is not None:
+            hidden_states = attn.spatial_norm(hidden_states, temb)
+
+        input_ndim = hidden_states.ndim
+
+        if input_ndim == 4:
+            batch_size, channel, height, width = hidden_states.shape
+            hidden_states = hidden_states.view(batch_size, channel, height * width).transpose(1, 2)
+
+        batch_size, sequence_length, _ = (
+            hidden_states.shape if encoder_hidden_states is None else encoder_hidden_states.shape
+        )
+
+        if attention_mask is not None:
+            attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length, batch_size)
+            # scaled_dot_product_attention expects attention_mask shape to be
+            # (batch, heads, source_length, target_length)
+            attention_mask = attention_mask.view(batch_size, attn.heads, -1, attention_mask.shape[-1])
+
+        if attn.group_norm is not None:
+            hidden_states = attn.group_norm(hidden_states.transpose(1, 2)).transpose(1, 2)
+
+        query = attn.to_q(hidden_states)
+
+        if encoder_hidden_states is None:
+            encoder_hidden_states = hidden_states
+        elif attn.norm_cross:
+            encoder_hidden_states = attn.norm_encoder_hidden_states(encoder_hidden_states)
+
+        key = attn.to_k(encoder_hidden_states)
+        value = attn.to_v(encoder_hidden_states)
+
+        inner_dim = key.shape[-1]
+        head_dim = inner_dim // attn.heads
+
+        query = query.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        key = key.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+        value = value.view(batch_size, -1, attn.heads, head_dim).transpose(1, 2)
+
+        if attn.norm_q is not None:
+            query = attn.norm_q(query)
+        if attn.norm_k is not None:
+            key = attn.norm_k(key)
+
+        # Apply RoPE if needed
+        if image_rotary_emb is not None:
+            query = apply_rotary_emb(query, image_rotary_emb)
+            if not attn.is_cross_attention:
+                key = apply_rotary_emb(key, image_rotary_emb)
+
+        # the output of sdp = (batch, num_heads, seq_len, head_dim)
+        # TODO: add support for attn.scale when we move to Torch 2.1
+        hidden_states = F.scaled_dot_product_attention(
+            query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False
+        )
+
+        hidden_states = hidden_states.transpose(1, 2).reshape(batch_size, -1, attn.heads * head_dim)
+        hidden_states = hidden_states.to(query.dtype)
+
+        # linear proj
+        hidden_states = attn.to_out[0](hidden_states)
+        # dropout
+        hidden_states = attn.to_out[1](hidden_states)
+
+        if input_ndim == 4:
+            hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)
+
+        if attn.residual_connection:
+            hidden_states = hidden_states + residual
+
+        hidden_states = hidden_states / attn.rescale_output_factor
+
+        return hidden_states
+
+
 class FusedAttnProcessor2_0:
     r"""
     Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). It uses
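To tie the two additions together, here is a minimal sketch, not taken from the commit, that builds a self-attention layer with the new `qk_norm` option and runs it through `HunyuanAttnProcessor2_0`. Rotary embeddings are omitted, so `image_rotary_emb` stays `None`; the dimensions are illustrative.

```python
import torch
from diffusers.models.attention_processor import Attention, HunyuanAttnProcessor2_0

# Self-attention block with per-head LayerNorm on Q/K, enabled by the new qk_norm argument.
attn = Attention(
    query_dim=128,
    heads=8,
    dim_head=16,
    qk_norm="layer_norm",
    processor=HunyuanAttnProcessor2_0(),
)

hidden_states = torch.randn(2, 64, 128)  # (batch, seq_len, query_dim)
out = attn(hidden_states)                # image_rotary_emb defaults to None in the processor
print(out.shape)                         # torch.Size([2, 64, 128])
```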
