
Commit 4e68e84

Merge branch 'main' into expand-flux-lora
2 parents 7f7b2c1 + f781b8c commit 4e68e84


48 files changed (+3,895 −56 lines)

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
@@ -400,6 +400,8 @@
      title: DiT
    - local: api/pipelines/flux
      title: Flux
+   - local: api/pipelines/control_flux_inpaint
+     title: FluxControlInpaint
    - local: api/pipelines/hunyuandit
      title: Hunyuan-DiT
    - local: api/pipelines/hunyuan_video

docs/source/en/api/pipelines/control_flux_inpaint.md

Lines changed: 89 additions & 0 deletions
@@ -0,0 +1,89 @@
<!--Copyright 2024 The HuggingFace Team, The Black Forest Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# FluxControlInpaint

FluxControlInpaintPipeline implements inpainting for the Flux.1 Depth/Canny models: it takes an image and a mask as input, uses a structural control image to guide generation, and returns the inpainted image.

FLUX.1 Depth and Canny [dev] are 12 billion parameter rectified flow transformers capable of generating an image based on a text description while following the structure of a given input image. **These are not ControlNet models**.

| Control type | Developer | Link |
| ------------ | --------- | ---- |
| Depth | [Black Forest Labs](https://huggingface.co/black-forest-labs) | [Link](https://huggingface.co/black-forest-labs/FLUX.1-Depth-dev) |
| Canny | [Black Forest Labs](https://huggingface.co/black-forest-labs) | [Link](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev) |

<Tip>

Flux can be quite expensive to run on consumer hardware devices. However, you can perform a suite of optimizations to run it faster and in a more memory-friendly manner. Check out [this section](https://huggingface.co/blog/sd3#memory-optimizations-for-sd3) for more details. Additionally, Flux can benefit from quantization for memory efficiency with a trade-off in inference latency. Refer to [this blog post](https://huggingface.co/blog/quanto-diffusers) to learn more. For an exhaustive list of resources, check out [this gist](https://gist.github.com/sayakpaul/b664605caf0aa3bf8585ab109dd5ac9c).

</Tip>

```python
import torch
from diffusers import FluxControlInpaintPipeline
from diffusers.models.transformers import FluxTransformer2DModel
from transformers import T5EncoderModel
from diffusers.utils import load_image, make_image_grid
from image_gen_aux import DepthPreprocessor  # https://github.com/huggingface/image_gen_aux
from PIL import Image
import numpy as np

pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev",
    torch_dtype=torch.bfloat16,
)
# use the following lines if you have GPU memory constraints
# (NF4-quantized transformer and text encoder with model CPU offload)
# ---------------------------------------------------------------
transformer = FluxTransformer2DModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="transformer", torch_dtype=torch.bfloat16
)
text_encoder_2 = T5EncoderModel.from_pretrained(
    "sayakpaul/FLUX.1-Depth-dev-nf4", subfolder="text_encoder_2", torch_dtype=torch.bfloat16
)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()
# ---------------------------------------------------------------
pipe.to("cuda")

prompt = "a blue robot singing opera with human-like expressions"
image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

# white (255) marks the region to inpaint -- here, a box around the robot's head
head_mask = np.zeros_like(image)
head_mask[65:580, 300:642] = 255
mask_image = Image.fromarray(head_mask)

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(image)[0].convert("RGB")

output = pipe(
    prompt=prompt,
    image=image,
    control_image=control_image,
    mask_image=mask_image,
    num_inference_steps=30,
    strength=0.9,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
make_image_grid([image, control_image, mask_image, output.resize(image.size)], rows=1, cols=4).save("output.png")
```

## FluxControlInpaintPipeline

[[autodoc]] FluxControlInpaintPipeline
- all
- __call__

## FluxPipelineOutput

[[autodoc]] pipelines.flux.pipeline_output.FluxPipelineOutput
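
The example in the new doc covers the Depth variant only. For the Canny variant listed in the table, a minimal preprocessing sketch might look like the following; it assumes the `controlnet_aux` package and its `CannyDetector`, which are not part of this commit, so treat it as an illustrative sketch rather than the documented recipe:

```python
import torch
from controlnet_aux import CannyDetector  # assumption: Canny preprocessor from controlnet_aux
from diffusers import FluxControlInpaintPipeline
from diffusers.utils import load_image

pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

# extract Canny edges to use as the structural control signal
processor = CannyDetector()
control_image = processor(image, low_threshold=50, high_threshold=200)
# the mask and the pipeline call then follow the Depth example above
```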

docs/source/en/quantization/gguf.md

Lines changed: 2 additions & 2 deletions
@@ -25,9 +25,9 @@ pip install -U gguf

Since GGUF is a single file format, use [`~FromSingleFileMixin.from_single_file`] to load the model and pass in the [`GGUFQuantizationConfig`].

- When using GGUF checkpoints, the quantized weights remain in a low memory `dtype`(typically `torch.unint8`) and are dynamically dequantized and cast to the configured `compute_dtype` during each module's forward pass through the model. The `GGUFQuantizationConfig` allows you to set the `compute_dtype`.
+ When using GGUF checkpoints, the quantized weights remain in a low memory `dtype`(typically `torch.uint8`) and are dynamically dequantized and cast to the configured `compute_dtype` during each module's forward pass through the model. The `GGUFQuantizationConfig` allows you to set the `compute_dtype`.

- The functions used for dynamic dequantizatation are based on the great work done by [city96](https://github.com/city96/ComfyUI-GGUF), who created the Pytorch ports of the original (`numpy`)[https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/quants.py] implementation by [compilade](https://github.com/compilade).
+ The functions used for dynamic dequantizatation are based on the great work done by [city96](https://github.com/city96/ComfyUI-GGUF), who created the Pytorch ports of the original [`numpy`](https://github.com/ggerganov/llama.cpp/blob/master/gguf-py/gguf/quants.py) implementation by [compilade](https://github.com/compilade).

```python
import torch
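
The hunk above cuts off at the start of the doc's code block. To make the prose concrete, a minimal loading sketch could look like the following; the checkpoint URL is an assumption for illustration, while `from_single_file` and `GGUFQuantizationConfig` are the APIs named in the text:

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# assumed GGUF checkpoint location -- substitute any Flux GGUF file you have
ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"

transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    # weights stay in the low-memory GGUF dtype and are dequantized to bfloat16 each forward pass
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```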

docs/source/en/quantization/overview.md

Lines changed: 3 additions & 3 deletions
@@ -33,8 +33,8 @@ If you are new to the quantization field, we recommend you to check out these be
## When to use what?

Diffusers currently supports the following quantization methods.
- - [BitsandBytes]()
- - [TorchAO]()
- - [GGUF]()
+ - [BitsandBytes](./bitsandbytes.md)
+ - [TorchAO](./torchao.md)
+ - [GGUF](./gguf.md)

[This resource](https://huggingface.co/docs/transformers/main/en/quantization/overview#when-to-use-what) provides a good overview of the pros and cons of different quantization techniques.

docs/source/en/tutorials/using_peft_for_inference.md

Lines changed: 15 additions & 6 deletions
@@ -56,7 +56,7 @@ image

With the `adapter_name` parameter, it is really easy to use another adapter for inference! Load the [nerijs/pixel-art-xl](https://huggingface.co/nerijs/pixel-art-xl) adapter that has been fine-tuned to generate pixel art images and call it `"pixel"`.

- The pipeline automatically sets the first loaded adapter (`"toy"`) as the active adapter, but you can activate the `"pixel"` adapter with the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method:
+ The pipeline automatically sets the first loaded adapter (`"toy"`) as the active adapter, but you can activate the `"pixel"` adapter with the [`~PeftAdapterMixin.set_adapters`] method:

```python
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")

@@ -85,7 +85,7 @@ By default, if the most up-to-date versions of PEFT and Transformers are detecte

You can also merge different adapter checkpoints for inference to blend their styles together.

- Once again, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method to activate the `pixel` and `toy` adapters and specify the weights for how they should be merged.
+ Once again, use the [`~PeftAdapterMixin.set_adapters`] method to activate the `pixel` and `toy` adapters and specify the weights for how they should be merged.

```python
pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])

@@ -114,7 +114,7 @@ Impressive! As you can see, the model generated an image that mixed the characte
> [!TIP]
> Through its PEFT integration, Diffusers also offers more efficient merging methods which you can learn about in the [Merge LoRAs](../using-diffusers/merge_loras) guide!

- To return to only using one adapter, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`] method to activate the `"toy"` adapter:
+ To return to only using one adapter, use the [`~PeftAdapterMixin.set_adapters`] method to activate the `"toy"` adapter:

```python
pipe.set_adapters("toy")

@@ -127,7 +127,7 @@ image = pipe(
image
```

- Or to disable all adapters entirely, use the [`~diffusers.loaders.UNet2DConditionLoadersMixin.disable_lora`] method to return the base model.
+ Or to disable all adapters entirely, use the [`~PeftAdapterMixin.disable_lora`] method to return the base model.

```python
pipe.disable_lora()

@@ -140,7 +140,8 @@ image
![no-lora](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_20_1.png)

### Customize adapters strength
- For even more customization, you can control how strongly the adapter affects each part of the pipeline. For this, pass a dictionary with the control strengths (called "scales") to [`~diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters`].
+
+ For even more customization, you can control how strongly the adapter affects each part of the pipeline. For this, pass a dictionary with the control strengths (called "scales") to [`~PeftAdapterMixin.set_adapters`].

For example, here's how you can turn on the adapter for the `down` parts, but turn it off for the `mid` and `up` parts:
```python

@@ -195,7 +196,7 @@ image

![block-lora-mixed](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_block_mixed.png)

- ## Manage active adapters
+ ## Manage adapters

You have attached multiple adapters in this tutorial, and if you're feeling a bit lost on what adapters have been attached to the pipeline's components, use the [`~diffusers.loaders.StableDiffusionLoraLoaderMixin.get_active_adapters`] method to check the list of active adapters:

@@ -212,3 +213,11 @@
list_adapters_component_wise
{"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]}
```
+
+ The [`~PeftAdapterMixin.delete_adapters`] function completely removes an adapter and their LoRA layers from a model.
+
+ ```py
+ pipe.delete_adapters("toy")
+ pipe.get_active_adapters()
+ ["pixel"]
+ ```
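
The "Customize adapters strength" hunk above cuts off right before the example of block-wise scales. A minimal sketch of what such a call can look like is shown below; it assumes an SDXL pipeline with a LoRA already loaded under the adapter name `"pixel"`, and the scale values are illustrative:

```python
# nested-dict "scales" accepted by set_adapters: top-level keys are pipeline components,
# and the UNet can be scaled per block ("down", "mid", "up")
scales = {
    "text_encoder": 0.5,
    "unet": {"down": 1.0, "mid": 0.0, "up": 0.0},
}
pipe.set_adapters("pixel", scales)
```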

examples/community/pipeline_hunyuandit_differential_img2img.py

Lines changed: 2 additions & 0 deletions
@@ -1008,6 +1008,8 @@ def __call__(
    self.transformer.inner_dim // self.transformer.num_heads,
    grid_crops_coords,
    (grid_height, grid_width),
+   device=device,
+   output_type="pt",
)

style = torch.tensor([0], device=device)

examples/community/regional_prompting_stable_diffusion.py

Lines changed: 1 addition & 1 deletion
@@ -129,7 +129,7 @@ def __call__(
    self.power = int(rp_args["power"]) if "power" in rp_args else 1

    prompts = prompt if isinstance(prompt, list) else [prompt]
-   n_prompts = negative_prompt if isinstance(prompt, list) else [negative_prompt]
+   n_prompts = negative_prompt if isinstance(negative_prompt, list) else [negative_prompt]
    self.batch = batch = num_images_per_prompt * len(prompts)

    if use_base:

examples/dreambooth/README_sana.md

Lines changed: 127 additions & 0 deletions
@@ -0,0 +1,127 @@
# DreamBooth training example for SANA

[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like stable diffusion given just a few (3~5) images of a subject.

The `train_dreambooth_lora_sana.py` script shows how to implement the training procedure with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) and adapt it for [SANA](https://arxiv.org/abs/2410.10629).

This will also allow us to push the trained model parameters to the Hugging Face Hub platform.

## Running locally with PyTorch

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

**Important**

To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date, as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:

```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install -e .
```

Then cd into the `examples/dreambooth` folder and run
```bash
pip install -r requirements_sana.txt
```

And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

Or for a default accelerate configuration without answering questions about your environment

```bash
accelerate config default
```

Or if your environment doesn't support an interactive shell (e.g., a notebook)

```python
from accelerate.utils import write_basic_config
write_basic_config()
```

When running `accelerate config`, setting torch compile mode to True can give dramatic speedups.
Note also that we use the PEFT library as the backend for LoRA training, so make sure to have `peft>=0.14.0` installed in your environment.

### Dog toy example

Now let's get our dataset. For this example we will use some dog images: https://huggingface.co/datasets/diffusers/dog-example.

Let's first download it locally:

```python
from huggingface_hub import snapshot_download

local_dir = "./dog"
snapshot_download(
    "diffusers/dog-example",
    local_dir=local_dir, repo_type="dataset",
    ignore_patterns=".gitattributes",
)
```

This will also allow us to push the trained LoRA parameters to the Hugging Face Hub platform.

Now, we can launch training using:

```bash
export MODEL_NAME="Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers"
export INSTANCE_DIR="dog"
export OUTPUT_DIR="trained-sana-lora"

accelerate launch train_dreambooth_lora_sana.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --mixed_precision="bf16" \
  --instance_prompt="a photo of sks dog" \
  --resolution=1024 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --use_8bit_adam \
  --learning_rate=1e-4 \
  --report_to="wandb" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=500 \
  --validation_prompt="A photo of sks dog in a bucket" \
  --validation_epochs=25 \
  --seed="0" \
  --push_to_hub
```

For using `push_to_hub`, make sure you're logged into your Hugging Face account:

```bash
huggingface-cli login
```

To better track our training experiments, we're using the following flags in the command above:

* `report_to="wandb"` will ensure the training runs are tracked on [Weights and Biases](https://wandb.ai/site). To use it, be sure to install `wandb` with `pip install wandb`. Don't forget to call `wandb login <your_api_key>` before training if you haven't done it before.
* `validation_prompt` and `validation_epochs` to allow the script to do a few validation inference runs. This allows us to qualitatively check if the training is progressing as expected.

## Notes

Additionally, we welcome you to explore the following CLI arguments:

* `--lora_layers`: The transformer modules to apply LoRA training on. Please specify the layers as a comma-separated string, e.g. `"to_k,to_q,to_v"` will result in LoRA training of the attention layers only.
* `--complex_human_instruction`: Instructions for complex human attention as shown [here](https://github.com/NVlabs/Sana/blob/main/configs/sana_app_config/Sana_1600M_app.yaml#L55).
* `--max_sequence_length`: Maximum sequence length to use for text embeddings.

We provide several options for memory optimization:

* `--offload`: When enabled, we will offload the text encoder and VAE to the CPU when they are not being used.
* `--cache_latents`: When enabled, we will pre-compute the latents from the input images with the VAE and remove the VAE from memory once done.
* `--use_8bit_adam`: When enabled, we will use the 8-bit version of AdamW provided by the `bitsandbytes` library.

Refer to the [official documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana) of the `SanaPipeline` to know more about the models available under the SANA family and their preferred dtypes during inference.
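
After training, a minimal inference sketch could look like the following; the LoRA repo id is a placeholder for whatever `--push_to_hub`/`--output_dir` produced, and loading SANA LoRAs this way assumes the `SanaPipeline` LoRA support that this training script targets:

```python
import torch
from diffusers import SanaPipeline

# load the same base model that was used for training
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# placeholder repo id -- replace with the Hub repo or local folder produced by training
pipe.load_lora_weights("your-username/trained-sana-lora")

image = pipe(prompt="A photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```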

examples/dreambooth/requirements_sana.txt

Lines changed: 8 additions & 0 deletions
@@ -0,0 +1,8 @@
accelerate>=1.0.0
torchvision
transformers>=4.47.0
ftfy
tensorboard
Jinja2
peft>=0.14.0
sentencepiece
