<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# SD3Transformer2D
This class is useful when *only* loading weights into a [`SD3Transformer2DModel`]. If you need to load weights into the text encoder, or into both a text encoder and the SD3Transformer2DModel, use the [`SD3LoraLoaderMixin`](lora#diffusers.loaders.SD3LoraLoaderMixin) class instead.
The [`SD3Transformer2DLoadersMixin`] class currently only loads IP-Adapter weights, but will be used in the future to save weights and load LoRAs.
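For context, here is a minimal sketch of what IP-Adapter loading looks like from the pipeline side. The [InstantX/SD3.5-Large-IP-Adapter](https://huggingface.co/InstantX/SD3.5-Large-IP-Adapter) checkpoint, the SigLIP image encoder, the reference image URL, and the sampling settings below are assumptions; adjust them to your setup.

```python
import torch
from diffusers import StableDiffusion3Pipeline
from diffusers.utils import load_image
from transformers import SiglipImageProcessor, SiglipVisionModel

image_encoder_id = "google/siglip-so400m-patch14-384"
ip_adapter_id = "InstantX/SD3.5-Large-IP-Adapter"

# The IP-Adapter needs an image encoder alongside the base pipeline
feature_extractor = SiglipImageProcessor.from_pretrained(image_encoder_id)
image_encoder = SiglipVisionModel.from_pretrained(image_encoder_id, torch_dtype=torch.float16)

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    torch_dtype=torch.float16,
    feature_extractor=feature_extractor,
    image_encoder=image_encoder,
).to("cuda")

# load_ip_adapter routes the transformer weights through SD3Transformer2DLoadersMixin
pipe.load_ip_adapter(ip_adapter_id)
pipe.set_ip_adapter_scale(0.6)

# Placeholder reference image; substitute your own
reference = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")

image = pipe(
    prompt="a cat sitting in a field of flowers",
    ip_adapter_image=reference,
    num_inference_steps=24,
    guidance_scale=5.0,
).images[0]
image.save("sd3-ip-adapter.png")
```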
<Tip>
To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.

</Tip>
<!-- From docs/source/en/api/pipelines/flux.md -->
## Combining Flux Turbo LoRAs with Flux Control, Fill, and Redux
We can combine Flux Turbo LoRAs with Flux Control and other pipelines like Fill and Redux to enable few-step inference. The example below shows how to do that with the Flux Control LoRA for depth and a turbo LoRA from [`ByteDance/Hyper-SD`](https://hf.co/ByteDance/Hyper-SD).
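A sketch of this combination is shown below. The depth preprocessor from the `image_gen_aux` package, the control image URL, and the adapter weights are assumptions; swap in your own reference image and tune the weights as needed.

```python
import torch
from huggingface_hub import hf_hub_download

from diffusers import FluxControlPipeline
from diffusers.utils import load_image
from image_gen_aux import DepthPreprocessor  # assumed third-party preprocessor package

control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)

# Load the Control LoRA for depth and the Hyper-SD turbo LoRA
control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")
control_pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd"
)
# Keep the depth LoRA dominant and scale the turbo LoRA down (weights are assumptions)
control_pipe.set_adapters(["depth", "hyper-sd"], adapter_weights=[0.85, 0.125])
control_pipe.to("cuda")

prompt = "A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage"

# Compute a depth map to condition on (image URL is a placeholder)
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")
processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(image)[0].convert("RGB")

image = control_pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=8,  # few-step inference enabled by the turbo LoRA
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```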
## Running FP16 inference

Flux can generate high-quality images with FP16 (for example, to accelerate inference on Turing/Volta GPUs) but produces different outputs compared to FP32/BF16. The issue is that some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing text encoders to run with FP32 inference thus removes this output difference. See [here](https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516) for details.
Flux also works with GGUF-quantized checkpoints for lower-memory inference. Make sure to read the [documentation on GGUF](../../quantization/gguf) to learn more about our GGUF support.
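As a rough sketch of what GGUF loading can look like (the [`city96/FLUX.1-dev-gguf`](https://huggingface.co/city96/FLUX.1-dev-gguf) checkpoint and quantization level below are assumptions; substitute any Flux GGUF file):

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_K_S.gguf"

# Load only the transformer from the GGUF file, dequantizing on the fly
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe("a cat holding a sign that says hello world").images[0]
image.save("flux-gguf.png")
```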
Refer to [this section](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogvideox#memory-optimization) to learn more about optimizing memory consumption.
<!-- From docs/source/en/api/pipelines/mochi.md -->
# Mochi 1 Preview
[Mochi 1 Preview](https://huggingface.co/genmo/mochi-1-preview) from Genmo.
<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.
</Tip>
## Generating videos with Mochi-1 Preview
The following example will download the full precision `mochi-1-preview` weights and produce the highest quality results but will require at least 42GB VRAM to run.
prompt ="Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
44
+
45
+
with torch.autocast("cuda", torch.bfloat16, cache_enabled=False):
46
+
frames = pipe(prompt, num_frames=85).frames[0]
47
+
48
+
export_to_video(frames, "mochi.mp4", fps=30)
49
+
```
## Using a lower precision variant to save memory
The following example will use the `bfloat16` variant of the model and requires 22GB VRAM to run. There is a slight drop in the quality of the generated video as a result.
prompt ="Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
67
+
frames = pipe(prompt, num_frames=85).frames[0]
68
+
69
+
export_to_video(frames, "mochi.mp4", fps=30)
70
+
```
## Reproducing the results from the Genmo Mochi repo
The [Genmo Mochi implementation](https://github.com/genmoai/mochi/tree/main) uses different precision values for each stage in the inference process. The text encoder and VAE use `torch.float32`, while the DiT uses `torch.bfloat16` with the [attention kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html#torch.nn.attention.sdpa_kernel) set to `EFFICIENT_ATTENTION`. Diffusers pipelines currently do not support setting different `dtypes` for different stages of the pipeline. To run inference in the same way as the original implementation, please refer to the following example.
<Tip>
The original Mochi implementation zeros out empty prompts. However, enabling this option and placing the entire pipeline under autocast can lead to numerical overflows with the T5 text encoder.
When enabling `force_zeros_for_empty_prompt`, it is recommended to run the text encoding step outside the autocast context in full precision.
</Tip>
<Tip>
Decoding the latents in full precision is very memory intensive. You will need at least 70GB VRAM to generate the 163 frames in this example. To reduce memory, either reduce the number of frames or run the decoding step in `torch.bfloat16`.
</Tip>
```python
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

from diffusers import MochiPipeline
from diffusers.utils import export_to_video
from diffusers.video_processor import VideoProcessor

# Load in the default full precision (float32) for the text encoder and VAE
pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview", force_zeros_for_empty_prompt=True)
pipe.enable_vae_tiling()
pipe.enable_model_cpu_offload()

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."

# Encode the prompt outside the autocast context, in full precision
with torch.no_grad():
    prompt_embeds, prompt_attention_mask, negative_prompt_embeds, negative_prompt_attention_mask = (
        pipe.encode_prompt(prompt=prompt)
    )

# Run the DiT in bfloat16 with the efficient attention kernel
with torch.autocast("cuda", torch.bfloat16):
    with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
        frames = pipe(
            prompt_embeds=prompt_embeds,
            prompt_attention_mask=prompt_attention_mask,
            negative_prompt_embeds=negative_prompt_embeds,
            negative_prompt_attention_mask=negative_prompt_attention_mask,
            guidance_scale=4.5,  # sampling settings are assumptions; adjust as needed
            num_inference_steps=64,
            height=480,
            width=848,
            num_frames=163,
            generator=torch.Generator("cuda").manual_seed(0),
            output_type="latent",
            return_dict=False,
        )[0]

# Denormalize the latents with the VAE's statistics, then decode in full precision
video_processor = VideoProcessor(vae_scale_factor=8)
latents_mean = torch.tensor(pipe.vae.config.latents_mean).view(1, 12, 1, 1, 1).to(frames.device, frames.dtype)
latents_std = torch.tensor(pipe.vae.config.latents_std).view(1, 12, 1, 1, 1).to(frames.device, frames.dtype)
frames = frames * latents_std / pipe.vae.config.scaling_factor + latents_mean

with torch.no_grad():
    video = pipe.vae.decode(frames.to(pipe.vae.dtype), return_dict=False)[0]

video = video_processor.postprocess_video(video)[0]
export_to_video(video, "mochi.mp4", fps=30)
```
## Running inference with multiple GPUs
144
+
145
+
It is possible to split the large Mochi transformer across multiple GPUs using the `device_map` and `max_memory` options in `from_pretrained`. In the following example we split the model across two GPUs, each with 24GB of VRAM.
```python
import torch
from diffusers import MochiPipeline, MochiTransformer3DModel
from diffusers.utils import export_to_video

model_id = "genmo/mochi-1-preview"

# Split the transformer across two GPUs; the per-device memory budget is an
# assumption, adjust it to your hardware
transformer = MochiTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    device_map="auto",
    max_memory={0: "24GB", 1: "24GB"},
)

pipe = MochiPipeline.from_pretrained(model_id, transformer=transformer)
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."

with torch.autocast("cuda", torch.bfloat16, cache_enabled=False):
    frames = pipe(prompt, num_frames=85).frames[0]

export_to_video(frames, "mochi.mp4", fps=30)
```