
Commit b1237f7
Merge branch 'main' into dtype-map
2 parents: 70ae4b6 + 75d7e5c

File tree

18 files changed: +967 −70 lines changed

docs/source/en/api/pipelines/hunyuan_video.md

Lines changed: 2 additions & 1 deletion

@@ -50,7 +50,8 @@ The following models are available for the image-to-video pipeline:
 | Model name | Description |
 |:---|:---|
 | [`Skywork/SkyReels-V1-Hunyuan-I2V`](https://huggingface.co/Skywork/SkyReels-V1-Hunyuan-I2V) | Skywork's custom finetune of HunyuanVideo (de-distilled). Performs best at `97x544x960` resolution with `guidance_scale=1.0`, `true_cfg_scale=6.0`, and a negative prompt. |
-| [`hunyuanvideo-community/HunyuanVideo-I2V`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V) | Tencent's official HunyuanVideo I2V model. Performs best at resolutions of 480, 720, 960, and 1280. A higher `shift` value when initializing the scheduler is recommended (good values are between 7 and 20). |
+| [`hunyuanvideo-community/HunyuanVideo-I2V-33ch`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V) | Tencent's official 33-channel HunyuanVideo I2V model. Performs best at resolutions of 480, 720, 960, and 1280. A higher `shift` value when initializing the scheduler is recommended (good values are between 7 and 20). |
+| [`hunyuanvideo-community/HunyuanVideo-I2V`](https://huggingface.co/hunyuanvideo-community/HunyuanVideo-I2V) | Tencent's official 16-channel HunyuanVideo I2V model. Performs best at resolutions of 480, 720, 960, and 1280. A higher `shift` value when initializing the scheduler is recommended (good values are between 7 and 20). |
 
 ## Quantization

docs/source/en/api/pipelines/wan.md

Lines changed: 354 additions & 11 deletions
Large diffs are not rendered by default.

docs/source/en/optimization/memory.md

Lines changed: 20 additions & 0 deletions

@@ -198,6 +198,18 @@ export_to_video(video, "output.mp4", fps=8)
 
 Group offloading (for CUDA devices with support for asynchronous data transfer streams) overlaps data transfer and computation to reduce the overall execution time compared to sequential offloading. This is enabled using layer prefetching with CUDA streams. The next layer to be executed is loaded onto the accelerator device while the current layer is being executed - this increases the memory requirements slightly. Group offloading also supports leaf-level offloading (equivalent to sequential CPU offloading) but can be made much faster when using streams.
 
+<Tip>
+
+- Group offloading may not work with all models out-of-the-box. If the forward implementation of the model contains weight-dependent device-casting of inputs, it may clash with the offloading mechanism's handling of device-casting.
+- The `offload_type` parameter can be set to either `block_level` or `leaf_level`. `block_level` offloads groups of `torch.nn.ModuleList` or `torch.nn.Sequential` modules based on the configurable attribute `num_blocks_per_group`. For example, if you set `num_blocks_per_group=2` on a standard transformer model containing 40 layers, it will onload/offload 2 layers at a time, for a total of 20 onloads/offloads. This drastically reduces the VRAM requirements. `leaf_level` offloads individual layers at the lowest level, which is equivalent to sequential offloading. However, unlike sequential offloading, group offloading can be made much faster when using streams, with minimal compromise to end-to-end generation time.
+- The `use_stream` parameter can be used with CUDA devices to enable prefetching layers for onload. It defaults to `False`. Layer prefetching allows overlapping computation and data transfer of model weights, which drastically reduces the overall execution time compared to other offloading methods. However, it can increase CPU RAM usage significantly. Ensure that the available CPU RAM is at least twice the size of the model when setting `use_stream=True`. You can find more information about CUDA streams [here](https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html).
+- If specifying `use_stream=True` on VAEs with tiling enabled, make sure to do a dummy forward pass (possibly with dummy inputs) before the actual inference to avoid device-mismatch errors. This may not work on all implementations; please open an issue if you encounter any problems.
+- The `low_cpu_mem_usage` parameter can be set to `True` to reduce CPU memory usage when using streams for group offloading. This is useful when CPU memory is the bottleneck, but it may counteract the benefits of using streams and increase the overall execution time. The CPU memory savings come from creating pinned tensors on-the-fly instead of pre-pinning them. This parameter is better suited for `leaf_level` offloading.
+
+For more information about available parameters and an explanation of how group offloading works, refer to [`~hooks.group_offloading.apply_group_offloading`].
+
+</Tip>
+
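The `block_level` behavior described in the tip above can be sketched in plain PyTorch. This is a hypothetical, simplified stand-in for `apply_group_offloading` (the helper name and hook bookkeeping below are illustrative assumptions, and stream-based prefetching is omitted): groups of blocks are moved to the compute device just before they run and moved back afterwards.

```python
import torch
from torch import nn

def apply_block_level_offloading(blocks, num_blocks_per_group, onload_device, offload_device):
    # Hypothetical sketch: split the block list into groups and attach hooks
    # that onload a group before its first block runs and offload it after
    # its last block finishes.
    groups = [blocks[i : i + num_blocks_per_group] for i in range(0, len(blocks), num_blocks_per_group)]
    for group in groups:
        members = list(group)

        def onload(module, args, members=members):
            for m in members:
                m.to(onload_device)  # bring the whole group onto the accelerator

        def offload(module, args, output, members=members):
            for m in members:
                m.to(offload_device)  # evict the group once it has executed

        members[0].register_forward_pre_hook(onload)
        members[-1].register_forward_hook(offload)

model = nn.ModuleList([nn.Linear(8, 8) for _ in range(4)])
# Both devices are CPU here so the example runs anywhere; on a GPU machine
# you would use onload_device=torch.device("cuda").
apply_block_level_offloading(
    model, num_blocks_per_group=2,
    onload_device=torch.device("cpu"), offload_device=torch.device("cpu"),
)
x = torch.randn(1, 8)
for block in model:
    x = block(x)
print(x.shape)  # torch.Size([1, 8])
```

With `num_blocks_per_group=2` on 4 blocks, this yields 2 onload/offload cycles per forward pass, matching the tip's arithmetic for 40 layers and 20 cycles.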
 ## FP8 layerwise weight-casting
 
 PyTorch supports `torch.float8_e4m3fn` and `torch.float8_e5m2` as weight storage dtypes, but they can't be used for computation in many different tensor operations due to unimplemented kernel support. However, you can use these dtypes to store model weights in fp8 precision and upcast them on-the-fly when the layers are used in the forward pass. This is known as layerwise weight-casting.
@@ -235,6 +247,14 @@ In the above example, layerwise casting is enabled on the transformer component
 
 However, you gain more control and flexibility by directly utilizing the [`~hooks.layerwise_casting.apply_layerwise_casting`] function instead of [`~ModelMixin.enable_layerwise_casting`].
 
+<Tip>
+
+- Layerwise casting may not work with all models out-of-the-box. Sometimes, the forward implementation of the model might contain internal typecasting of weight values. Such implementations are not supported due to the currently simplistic implementation of layerwise casting, which assumes that the forward pass is independent of the weight precision and that the input dtypes are always in `compute_dtype`. An example of an incompatible implementation can be found [here](https://github.com/huggingface/transformers/blob/7f5077e53682ca855afc826162b204ebf809f1f9/src/transformers/models/t5/modeling_t5.py#L294-L299).
+- Layerwise casting may fail on custom modeling implementations that make use of [PEFT](https://github.com/huggingface/peft) layers. Some minimal checks to handle this case are implemented but are not extensively tested or guaranteed to work in all cases.
+- It can also be applied partially to specific layers of a model, either by calling the `apply_layerwise_casting` function on specific internal modules, or by specifying the `skip_modules_pattern` and `skip_modules_classes` parameters for a root module. These parameters are particularly useful for layers such as normalization and modulation.
+
+</Tip>
+
 ## Channels-last memory format
 
 The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance, but you should still try it and see if it works for your model.
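Opting a convolutional module into channels-last takes a single `.to(memory_format=...)` call; the tensor shape stays NCHW, only the underlying memory layout changes:

```python
import torch
from torch import nn

# Convert both the module's weights and the input to channels-last layout.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1).to(memory_format=torch.channels_last)
x = torch.randn(1, 3, 32, 32).to(memory_format=torch.channels_last)

out = conv(x)
print(out.shape)  # torch.Size([1, 8, 32, 32]) -- logical shape is unchanged
```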

docs/source/ko/training/controlnet.md

Lines changed: 0 additions & 6 deletions

@@ -66,12 +66,6 @@ from accelerate.utils import write_basic_config
 write_basic_config()
 ```
 
-## Circle-filling dataset
-
-The original dataset is hosted in the ControlNet [repo](https://huggingface.co/lllyasviel/ControlNet/blob/main/training/fill50k.zip), but we re-uploaded it [here](https://huggingface.co/datasets/fusing/fill50k) so it is compatible with 🤗 Datasets and the data loading can be handled within the training script.
-
-Our training example uses [`stable-diffusion-v1-5/stable-diffusion-v1-5`](https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5), which was also used to train the original ControlNet. However, a ControlNet can be trained to augment any compatible Stable Diffusion model, such as [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) or [`stabilityai/stable-diffusion-2-1`](https://huggingface.co/stabilityai/stable-diffusion-2-1).
-
 To use your own dataset, see the [Create a dataset for training](create_dataset) guide.
 
 ## Training

scripts/convert_hunyuan_video_to_diffusers.py

Lines changed: 22 additions & 1 deletion

@@ -160,8 +160,9 @@ def remap_single_transformer_blocks_(key, state_dict):
         "pooled_projection_dim": 768,
         "rope_theta": 256.0,
         "rope_axes_dim": (16, 56, 56),
+        "image_condition_type": None,
     },
-    "HYVideo-T/2-I2V": {
+    "HYVideo-T/2-I2V-33ch": {
         "in_channels": 16 * 2 + 1,
         "out_channels": 16,
         "num_attention_heads": 24,
@@ -178,6 +179,26 @@ def remap_single_transformer_blocks_(key, state_dict):
         "pooled_projection_dim": 768,
         "rope_theta": 256.0,
         "rope_axes_dim": (16, 56, 56),
+        "image_condition_type": "latent_concat",
+    },
+    "HYVideo-T/2-I2V-16ch": {
+        "in_channels": 16,
+        "out_channels": 16,
+        "num_attention_heads": 24,
+        "attention_head_dim": 128,
+        "num_layers": 20,
+        "num_single_layers": 40,
+        "num_refiner_layers": 2,
+        "mlp_ratio": 4.0,
+        "patch_size": 2,
+        "patch_size_t": 1,
+        "qk_norm": "rms_norm",
+        "guidance_embeds": True,
+        "text_embed_dim": 4096,
+        "pooled_projection_dim": 768,
+        "rope_theta": 256.0,
+        "rope_axes_dim": (16, 56, 56),
+        "image_condition_type": "token_replace",
     },
 }
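A plausible reading of the `in_channels` values above (an interpretation, not stated in the diff): with `latent_concat` conditioning, the image-condition latents and a mask appear to be concatenated onto the video latent channels, which would account for the `16 * 2 + 1` expression and the "33ch" name, while `token_replace` keeps the plain 16 latent channels.

```python
# Hypothetical back-of-the-envelope check of the config values above.
video_latent_channels = 16
image_latent_channels = 16  # assumed: the condition image shares the VAE latent space
mask_channels = 1

# "latent_concat" conditioning concatenates condition latents and a mask.
in_channels_33ch = video_latent_channels + image_latent_channels + mask_channels
# "token_replace" conditions without adding channels.
in_channels_16ch = video_latent_channels

print(in_channels_33ch, in_channels_16ch)  # 33 16
```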

src/diffusers/hooks/group_offloading.py

Lines changed: 5 additions & 1 deletion

@@ -331,7 +331,7 @@ def apply_group_offloading(
     num_blocks_per_group: Optional[int] = None,
     non_blocking: bool = False,
     use_stream: bool = False,
-    low_cpu_mem_usage=False,
+    low_cpu_mem_usage: bool = False,
 ) -> None:
     r"""
     Applies group offloading to the internal layers of a torch.nn.Module. To understand what group offloading is, and
@@ -378,6 +378,10 @@ def apply_group_offloading(
         use_stream (`bool`, defaults to `False`):
             If True, offloading and onloading is done asynchronously using a CUDA stream. This can be useful for
             overlapping computation and data transfer.
+        low_cpu_mem_usage (`bool`, defaults to `False`):
+            If True, the CPU memory usage is minimized by pinning tensors on-the-fly instead of pre-pinning them. This
+            option only matters when using streamed CPU offloading (i.e. `use_stream=True`). This can be useful when
+            the CPU memory is a bottleneck but may counteract the benefits of using streams.
 
     Example:
     ```python

src/diffusers/loaders/peft.py

Lines changed: 3 additions & 0 deletions

@@ -307,6 +307,9 @@ def load_lora_adapter(self, pretrained_model_name_or_path_or_dict, prefix="trans
         try:
             inject_adapter_in_model(lora_config, self, adapter_name=adapter_name, **peft_kwargs)
             incompatible_keys = set_peft_model_state_dict(self, state_dict, adapter_name, **peft_kwargs)
+            # Set peft config loaded flag to True if module has been successfully injected and incompatible keys retrieved
+            if not self._hf_peft_config_loaded:
+                self._hf_peft_config_loaded = True
         except Exception as e:
             # In case `inject_adapter_in_model()` was unsuccessful even before injecting the `peft_config`.
             if hasattr(self, "peft_config"):

src/diffusers/loaders/single_file_model.py

Lines changed: 1 addition & 0 deletions

@@ -282,6 +282,7 @@ def from_single_file(cls, pretrained_model_link_or_path_or_dict: Optional[str] =
         if quantization_config is not None:
             hf_quantizer = DiffusersAutoQuantizer.from_config(quantization_config)
             hf_quantizer.validate_environment()
+            torch_dtype = hf_quantizer.update_torch_dtype(torch_dtype)
 
         else:
             hf_quantizer = None

src/diffusers/models/transformers/latte_transformer_3d.py

Lines changed: 1 addition & 1 deletion

@@ -273,7 +273,7 @@ def forward(
             hidden_states = hidden_states.reshape(-1, hidden_states.shape[-2], hidden_states.shape[-1])
 
             if i == 0 and num_frame > 1:
-                hidden_states = hidden_states + self.temp_pos_embed
+                hidden_states = hidden_states + self.temp_pos_embed.to(hidden_states.dtype)
 
             if torch.is_grad_enabled() and self.gradient_checkpointing:
                 hidden_states = self._gradient_checkpointing_func(
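The one-line change above guards against a dtype mismatch between a full-precision positional-embedding buffer and lower-precision activations. A standalone sketch of the failure mode (tensor names are illustrative): without the explicit cast, PyTorch type promotion silently upcasts the result to float32, and with more exotic storage dtypes the addition can fail outright.

```python
import torch

# Buffers such as positional embeddings are often kept in float32 even when
# the model runs in half precision.
temp_pos_embed = torch.randn(1, 4, 8)                       # float32 buffer
hidden_states = torch.randn(1, 4, 8, dtype=torch.float16)   # fp16 activations

promoted = hidden_states + temp_pos_embed                   # silently promotes to float32
fixed = hidden_states + temp_pos_embed.to(hidden_states.dtype)  # stays fp16

print(promoted.dtype, fixed.dtype)  # torch.float32 torch.float16
```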

src/diffusers/models/transformers/sana_transformer.py

Lines changed: 5 additions & 0 deletions

@@ -326,6 +326,10 @@ class SanaTransformer2DModel(ModelMixin, ConfigMixin, PeftAdapterMixin, FromOrig
             Whether to use elementwise affinity in the normalization layer.
         norm_eps (`float`, defaults to `1e-6`):
             The epsilon value for the normalization layer.
+        qk_norm (`str`, *optional*, defaults to `None`):
+            The normalization to use for the query and key.
+        timestep_scale (`float`, defaults to `1.0`):
+            The scale to use for the timesteps.
     """
 
     _supports_gradient_checkpointing = True
@@ -355,6 +359,7 @@ def __init__(
         guidance_embeds: bool = False,
         guidance_embeds_scale: float = 0.1,
         qk_norm: Optional[str] = None,
+        timestep_scale: float = 1.0,
     ) -> None:
         super().__init__()

0 commit comments