
Commit e8b2352

Merge branch 'main' into test-sana-lora-training
2 parents 31b1a8e + 02c777c commit e8b2352

72 files changed (+4046 additions, -386 deletions)

.github/workflows/push_tests_mps.yml

Lines changed: 1 addition & 1 deletion
@@ -46,7 +46,7 @@ jobs:
         shell: arch -arch arm64 bash {0}
         run: |
           ${CONDA_RUN} python -m pip install --upgrade pip uv
-          ${CONDA_RUN} python -m uv pip install -e [quality,test]
+          ${CONDA_RUN} python -m uv pip install -e ".[quality,test]"
           ${CONDA_RUN} python -m uv pip install torch torchvision torchaudio
           ${CONDA_RUN} python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
           ${CONDA_RUN} python -m uv pip install transformers --upgrade

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -238,6 +238,8 @@
     title: Textual Inversion
   - local: api/loaders/unet
     title: UNet
+  - local: api/loaders/transformer_sd3
+    title: SD3Transformer2D
   - local: api/loaders/peft
     title: PEFT
   title: Loaders

docs/source/en/api/attnprocessor.md

Lines changed: 2 additions & 0 deletions
@@ -86,6 +86,8 @@ An attention processor is a class for applying different types of attention mech

 [[autodoc]] models.attention_processor.IPAdapterAttnProcessor2_0

+[[autodoc]] models.attention_processor.SD3IPAdapterJointAttnProcessor2_0
+
 ## JointAttnProcessor2_0

 [[autodoc]] models.attention_processor.JointAttnProcessor2_0

docs/source/en/api/loaders/ip_adapter.md

Lines changed: 6 additions & 0 deletions
@@ -24,6 +24,12 @@ Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading]

 [[autodoc]] loaders.ip_adapter.IPAdapterMixin

+## SD3IPAdapterMixin
+
+[[autodoc]] loaders.ip_adapter.SD3IPAdapterMixin
+  - all
+  - is_ip_adapter_active
+
 ## IPAdapterMaskProcessor

 [[autodoc]] image_processor.IPAdapterMaskProcessor

docs/source/en/api/loaders/transformer_sd3.md

Lines changed: 29 additions & 0 deletions

@@ -0,0 +1,29 @@
+<!--Copyright 2024 The HuggingFace Team. All rights reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
+an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
+specific language governing permissions and limitations under the License.
+-->
+
+# SD3Transformer2D
+
+This class is useful when *only* loading weights into a [`SD3Transformer2DModel`]. If you need to load weights into the text encoder, or into both a text encoder and the [`SD3Transformer2DModel`], use the [`SD3LoraLoaderMixin`](lora#diffusers.loaders.SD3LoraLoaderMixin) class instead.
+
+The [`SD3Transformer2DLoadersMixin`] class currently only loads IP-Adapter weights, but will be used in the future to save weights and load LoRAs.
+
+<Tip>
+
+To learn more about how to load LoRA weights, see the [LoRA](../../using-diffusers/loading_adapters#lora) loading guide.
+
+</Tip>
+
+## SD3Transformer2DLoadersMixin
+
+[[autodoc]] loaders.transformer_sd3.SD3Transformer2DLoadersMixin
+  - all
+  - _load_ip_adapter_weights
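For orientation, here is a minimal sketch of how these new loaders are typically exercised end to end: the pipeline-level `load_ip_adapter` (from `SD3IPAdapterMixin`, documented above) hands the transformer weights to `SD3Transformer2DLoadersMixin._load_ip_adapter_weights`. The checkpoint IDs, the SigLIP image encoder, and the call arguments below are assumptions for illustration, not part of this diff.

```py
# Hypothetical usage sketch -- repo IDs, image encoder, and parameters are assumptions.
import torch
from transformers import SiglipImageProcessor, SiglipVisionModel
from diffusers import StableDiffusion3Pipeline
from diffusers.utils import load_image

# The base SD3.5 repo is assumed not to ship an image encoder, so one is passed in explicitly.
image_encoder = SiglipVisionModel.from_pretrained("google/siglip-so400m-patch14-384", torch_dtype=torch.float16)
feature_extractor = SiglipImageProcessor.from_pretrained("google/siglip-so400m-patch14-384")

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    image_encoder=image_encoder,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
).to("cuda")

# SD3IPAdapterMixin.load_ip_adapter() loads the image projection and routes the attention
# weights into the transformer via SD3Transformer2DLoadersMixin._load_ip_adapter_weights().
pipe.load_ip_adapter("InstantX/SD3.5-Large-IP-Adapter")  # assumed checkpoint
pipe.set_ip_adapter_scale(0.6)

reference = load_image("path/to/reference_image.png")  # any reference image

image = pipe(
    prompt="a cat sitting on a windowsill",
    ip_adapter_image=reference,
    num_inference_steps=28,
    guidance_scale=5.0,
).images[0]
image.save("sd3_ip_adapter.png")
```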

docs/source/en/api/models/autoencoder_kl_hunyuan_video.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ The model can be loaded with the following code snippet.
 ```python
 from diffusers import AutoencoderKLHunyuanVideo

-vae = AutoencoderKLHunyuanVideo.from_pretrained("tencent/HunyuanVideo", torch_dtype=torch.float16)
+vae = AutoencoderKLHunyuanVideo.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="vae", torch_dtype=torch.float16)
 ```

 ## AutoencoderKLHunyuanVideo

docs/source/en/api/models/hunyuan_video_transformer_3d.md

Lines changed: 1 addition & 1 deletion
@@ -18,7 +18,7 @@ The model can be loaded with the following code snippet.
 ```python
 from diffusers import HunyuanVideoTransformer3DModel

-transformer = HunyuanVideoTransformer3DModel.from_pretrained("tencent/HunyuanVideo", torch_dtype=torch.bfloat16)
+transformer = HunyuanVideoTransformer3DModel.from_pretrained("hunyuanvideo-community/HunyuanVideo", subfolder="transformer", torch_dtype=torch.bfloat16)
 ```

 ## HunyuanVideoTransformer3DModel

docs/source/en/api/models/sana_transformer2d.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ The model can be loaded with the following code snippet.
 ```python
 from diffusers import SanaTransformer2DModel

-transformer = SanaTransformer2DModel.from_pretrained("Efficient-Large-Model/Sana_1600M_1024px_diffusers", subfolder="transformer", torch_dtype=torch.float16)
+transformer = SanaTransformer2DModel.from_pretrained("Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
 ```

 ## SanaTransformer2DModel
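For context, a minimal full-pipeline sketch built on the same BF16 checkpoint referenced above; the `SanaPipeline` prompt and call arguments are illustrative assumptions, not part of this diff.

```py
# Illustrative sketch; prompt and generation parameters are assumptions.
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(prompt="a tiny astronaut hatching from an egg on the moon").images[0]
image.save("sana.png")
```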

docs/source/en/api/pipelines/flux.md

Lines changed: 37 additions & 0 deletions
@@ -268,6 +268,43 @@ images = pipe(
 images[0].save("flux-redux.png")
 ```

+## Combining Flux Turbo LoRAs with Flux Control, Fill, and Redux
+
+We can combine Flux Turbo LoRAs with Flux Control and other pipelines like Fill and Redux to enable few-step inference. The example below shows how to do that for the Flux Control depth LoRA and the turbo LoRA from [`ByteDance/Hyper-SD`](https://hf.co/ByteDance/Hyper-SD).
+
+```py
+from diffusers import FluxControlPipeline
+from image_gen_aux import DepthPreprocessor
+from diffusers.utils import load_image
+from huggingface_hub import hf_hub_download
+import torch
+
+control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
+control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")
+control_pipe.load_lora_weights(
+    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd"
+)
+control_pipe.set_adapters(["depth", "hyper-sd"], adapter_weights=[0.85, 0.125])
+control_pipe.enable_model_cpu_offload()
+
+prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
+control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")
+
+processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
+control_image = processor(control_image)[0].convert("RGB")
+
+image = control_pipe(
+    prompt=prompt,
+    control_image=control_image,
+    height=1024,
+    width=1024,
+    num_inference_steps=8,
+    guidance_scale=10.0,
+    generator=torch.Generator().manual_seed(42),
+).images[0]
+image.save("output.png")
+```
+
 ## Running FP16 inference

 Flux can generate high-quality images with FP16 (i.e. to accelerate inference on Turing/Volta GPUs) but produces different outputs compared to FP32/BF16. The issue is that some activations in the text encoders have to be clipped when running in FP16, which affects the overall image. Forcing text encoders to run with FP32 inference thus removes this output difference. See [here](https://github.com/huggingface/diffusers/pull/9097#issuecomment-2272292516) for details.
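For reference, a minimal sketch of FP16 inference along these lines; the `FLUX.1-schnell` checkpoint, the memory-saving calls, and the generation parameters are assumptions for illustration, not taken from this diff. To avoid the FP16 output difference described above, the text encoders can instead be kept in FP32 and the resulting embeddings passed in via `prompt_embeds`.

```py
# Illustrative FP16 sketch; checkpoint and parameters are assumptions.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_sequential_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

# Cast to FP16 after loading/offloading rather than in the constructor.
pipe.to(torch.float16)

out = pipe(
    prompt="A cat holding a sign that says hello world",
    guidance_scale=0.0,
    height=768,
    width=1360,
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
out.save("flux-fp16.png")
```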

docs/source/en/api/pipelines/hunyuan_video.md

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ Recommendations for inference:
 - Transformer should be in `torch.bfloat16`.
 - VAE should be in `torch.float16`.
 - `num_frames` should be of the form `4 * k + 1`, for example `49` or `129`.
-- For smaller resolution images, try lower values of `shift` (between `2.0` to `5.0`) in the [Scheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler.shift). For larger resolution images, try higher values (between `7.0` and `12.0`). The default value is `7.0` for HunyuanVideo.
+- For smaller resolution videos, try lower values of `shift` (between `2.0` to `5.0`) in the [Scheduler](https://huggingface.co/docs/diffusers/main/en/api/schedulers/flow_match_euler_discrete#diffusers.FlowMatchEulerDiscreteScheduler.shift). For larger resolution images, try higher values (between `7.0` and `12.0`). The default value is `7.0` for HunyuanVideo.
 - For more information about supported resolutions and other details, please refer to the original repository [here](https://github.com/Tencent/HunyuanVideo/).

 ## HunyuanVideoPipeline
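Putting the recommendations above together, a minimal sketch of the dtype split and scheduler `shift` override; the resolution, frame count, and shift value are illustrative assumptions, not part of this diff.

```py
# Illustrative sketch of the recommended dtype split; parameters are assumptions.
import torch
from diffusers import FlowMatchEulerDiscreteScheduler, HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"

# Transformer in bfloat16, the rest of the pipeline (including the VAE) in float16.
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(model_id, transformer=transformer, torch_dtype=torch.float16)

# Lower `shift` for a smaller resolution (assumed value within the 2.0-5.0 range suggested above).
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config, shift=5.0)

pipe.vae.enable_tiling()
pipe.to("cuda")

video = pipe(
    prompt="A cat walks on the grass, realistic style.",
    height=320,
    width=512,
    num_frames=61,  # of the form 4 * k + 1
    num_inference_steps=30,
).frames[0]
export_to_video(video, "output.mp4", fps=15)
```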
