
Commit 8601c61

Merge branch 'main' into device_map_tests_common
2 parents fdb56cf + d72184e commit 8601c61


48 files changed: 5293 additions & 141 deletions

docs/source/en/_toctree.yml

Lines changed: 6 additions & 0 deletions

@@ -180,6 +180,8 @@
      title: Caching
    - local: optimization/memory
      title: Reduce memory usage
+   - local: optimization/pruna
+     title: Pruna
    - local: optimization/xformers
      title: xFormers
    - local: optimization/tome
@@ -283,6 +285,8 @@
      title: AllegroTransformer3DModel
    - local: api/models/aura_flow_transformer2d
      title: AuraFlowTransformer2DModel
+   - local: api/models/chroma_transformer
+     title: ChromaTransformer2DModel
    - local: api/models/cogvideox_transformer3d
      title: CogVideoXTransformer3DModel
    - local: api/models/cogview3plus_transformer2d
@@ -405,6 +409,8 @@
      title: AutoPipeline
    - local: api/pipelines/blip_diffusion
      title: BLIP-Diffusion
+   - local: api/pipelines/chroma
+     title: Chroma
    - local: api/pipelines/cogvideox
      title: CogVideoX
    - local: api/pipelines/cogview3
Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# ChromaTransformer2DModel

A modified Flux Transformer model from [Chroma](https://huggingface.co/lodestones/Chroma).

## ChromaTransformer2DModel

[[autodoc]] ChromaTransformer2DModel
Lines changed: 71 additions & 0 deletions

@@ -0,0 +1,71 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Chroma

<div class="flex flex-wrap space-x-1">
  <img alt="LoRA" src="https://img.shields.io/badge/LoRA-d8b4fe?style=flat"/>
  <img alt="MPS" src="https://img.shields.io/badge/MPS-000000?style=flat&logo=apple&logoColor=white">
</div>

Chroma is a text-to-image generation model based on Flux.

The original model checkpoints for Chroma can be found [here](https://huggingface.co/lodestones/Chroma).

<Tip>

Chroma can use all the same optimizations as Flux.

</Tip>

## Inference (Single File)

The `ChromaTransformer2DModel` supports loading checkpoints in the original format. This is also useful when loading finetuned or quantized versions of the model that have been published by the community.

The following example demonstrates how to run Chroma from a single file checkpoint.

```python
import torch
from diffusers import ChromaTransformer2DModel, ChromaPipeline
from transformers import T5EncoderModel, T5Tokenizer

bfl_repo = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16

transformer = ChromaTransformer2DModel.from_single_file(
    "https://huggingface.co/lodestones/Chroma/blob/main/chroma-unlocked-v35.safetensors",
    torch_dtype=dtype,
)

text_encoder = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)
tokenizer = T5Tokenizer.from_pretrained(bfl_repo, subfolder="tokenizer_2")

pipe = ChromaPipeline.from_pretrained(bfl_repo, transformer=transformer, text_encoder=text_encoder, tokenizer=tokenizer, torch_dtype=dtype)

pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=4.0,
    output_type="pil",
    num_inference_steps=26,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]

image.save("image.png")
```
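The same single-file entry point can, in principle, also load community-quantized GGUF exports of Chroma checkpoints. The sketch below is illustrative rather than part of this commit: the checkpoint URL is a placeholder, and it assumes your installed diffusers version supports `GGUFQuantizationConfig` with `from_single_file`, mirroring the Flux single-file workflow.

```python
import torch
from diffusers import ChromaTransformer2DModel, GGUFQuantizationConfig

# Placeholder URL: substitute a real community GGUF export of a Chroma checkpoint.
gguf_url = "https://huggingface.co/<community-user>/<chroma-gguf-repo>/blob/main/chroma-unlocked-v35-Q4_K_M.gguf"

transformer = ChromaTransformer2DModel.from_single_file(
    gguf_url,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
# The transformer can then be passed to ChromaPipeline.from_pretrained as shown above.
```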
## ChromaPipeline

[[autodoc]] ChromaPipeline
  - all
  - __call__

docs/source/en/api/pipelines/cosmos.md

Lines changed: 16 additions & 0 deletions

@@ -36,6 +36,22 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers)
    - all
    - __call__

+ ## Cosmos2TextToImagePipeline
+
+ [[autodoc]] Cosmos2TextToImagePipeline
+   - all
+   - __call__
+
+ ## Cosmos2VideoToWorldPipeline
+
+ [[autodoc]] Cosmos2VideoToWorldPipeline
+   - all
+   - __call__
+
  ## CosmosPipelineOutput

  [[autodoc]] pipelines.cosmos.pipeline_output.CosmosPipelineOutput
+
+ ## CosmosImagePipelineOutput
+
+ [[autodoc]] pipelines.cosmos.pipeline_output.CosmosImagePipelineOutput
Lines changed: 187 additions & 0 deletions

@@ -0,0 +1,187 @@
# Pruna

[Pruna](https://github.com/PrunaAI/pruna) is a model optimization framework that offers various optimization methods - quantization, pruning, caching, compilation - for accelerating inference and reducing memory usage. A general overview of the optimization methods is shown below.
| Technique    | Description                                                                                                                     | Speed | Memory | Quality |
|--------------|---------------------------------------------------------------------------------------------------------------------------------|:-----:|:------:|:-------:|
| `batcher`    | Groups multiple inputs together to be processed simultaneously, improving computational efficiency and reducing processing time. | ✅ | ❌ | ➖ |
| `cacher`     | Stores intermediate results of computations to speed up subsequent operations.                                                   | ✅ | ➖ | ➖ |
| `compiler`   | Optimises the model with instructions for specific hardware.                                                                     | ✅ | ➖ | ➖ |
| `distiller`  | Trains a smaller, simpler model to mimic a larger, more complex model.                                                           | ✅ | ✅ | ❌ |
| `quantizer`  | Reduces the precision of weights and activations, lowering memory requirements.                                                  | ✅ | ✅ | ❌ |
| `pruner`     | Removes less important or redundant connections and neurons, resulting in a sparser, more efficient network.                     | ✅ | ✅ | ❌ |
| `recoverer`  | Restores the performance of a model after compression.                                                                           | ➖ | ➖ | ✅ |
| `factorizer` | Factorization batches several small matrix multiplications into one large fused operation.                                       | ✅ | ➖ | ➖ |
| `enhancer`   | Enhances the model output by applying post-processing algorithms such as denoising or upscaling.                                 | ❌ | ➖ | ✅ |

✅ (improves), ➖ (approx. the same), ❌ (worsens)

Explore the full range of optimization methods in the [Pruna documentation](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/configure.html#configure-algorithms).
## Installation

Install Pruna with the following command.

```bash
pip install pruna
```

## Optimize Diffusers models

A broad range of optimization algorithms is supported for Diffusers models, as shown below.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/PrunaAI/documentation-images/resolve/main/diffusers/diffusers_combinations.png" alt="Overview of the supported optimization algorithms for diffusers models">
</div>

The example below optimizes [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) with a combination of factorizer, compiler, and cacher algorithms. This combination accelerates inference by up to 4.2x and cuts peak GPU memory usage from 34.7GB to 28.0GB, all while maintaining virtually the same output quality.

> [!TIP]
> Refer to the [Pruna optimization](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/configure.html) docs to learn more about the optimization techniques used in this example.

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/PrunaAI/documentation-images/resolve/main/diffusers/flux_combination.png" alt="Optimization techniques used for FLUX.1-dev showing the combination of factorizer, compiler, and cacher algorithms">
</div>

Start by defining a `SmashConfig` with the optimization algorithms to use. To optimize the model, wrap the pipeline and the `SmashConfig` with `smash`, and then use the pipeline as normal for inference.

```python
import torch
from diffusers import FluxPipeline

from pruna import PrunaModel, SmashConfig, smash

# load the model
# Try segmind/Segmind-Vega or black-forest-labs/FLUX.1-schnell with a small GPU memory
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cuda")

# define the configuration
smash_config = SmashConfig()
smash_config["factorizer"] = "qkv_diffusers"
smash_config["compiler"] = "torch_compile"
smash_config["torch_compile_target"] = "module_list"
smash_config["cacher"] = "fora"
smash_config["fora_interval"] = 2

# for the best results in terms of speed you can add these configs,
# however they will increase your warmup time from 1.5 min to 10 min
# smash_config["torch_compile_mode"] = "max-autotune-no-cudagraphs"
# smash_config["quantizer"] = "torchao"
# smash_config["torchao_quant_type"] = "fp8dq"
# smash_config["torchao_excluded_modules"] = "norm+embedding"

# optimize the model
smashed_pipe = smash(pipe, smash_config)

# run the model
smashed_pipe("a knitted purple prune").images[0]
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/PrunaAI/documentation-images/resolve/main/diffusers/flux_smashed_comparison.png">
</div>

After optimization, we can share and load the optimized model using the Hugging Face Hub.

```python
# save the model
smashed_pipe.save_to_hub("<username>/FLUX.1-dev-smashed")

# load the model
smashed_pipe = PrunaModel.from_hub("<username>/FLUX.1-dev-smashed")
```
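Once reloaded, the `PrunaModel` behaves like the original pipeline for inference; a minimal sketch, continuing from the snippet above:

```python
# run the reloaded, optimized pipeline exactly like the original FluxPipeline
image = smashed_pipe("a knitted purple prune").images[0]
```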
## Evaluate and benchmark Diffusers models

Pruna provides the [EvaluationAgent](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/evaluate.html) to evaluate the quality of your optimized models.

We can define the metrics we care about, such as total time and throughput, and the dataset to evaluate on. We can then define a model and pass it to the `EvaluationAgent`.

<hfoptions id="eval">
<hfoption id="optimized model">

We can load and evaluate an optimized model by defining a `Task` with the metrics and dataset and passing it to the `EvaluationAgent`.
```python
import torch
from diffusers import FluxPipeline

from pruna import PrunaModel
from pruna.data.pruna_datamodule import PrunaDataModule
from pruna.evaluation.evaluation_agent import EvaluationAgent
from pruna.evaluation.metrics import (
    ThroughputMetric,
    TorchMetricWrapper,
    TotalTimeMetric,
)
from pruna.evaluation.task import Task

# define the device
device = "cuda" if torch.cuda.is_available() else "mps" if torch.backends.mps.is_available() else "cpu"

# load the model
# Try PrunaAI/Segmind-Vega-smashed or PrunaAI/FLUX.1-dev-smashed with a small GPU memory
smashed_pipe = PrunaModel.from_hub("PrunaAI/FLUX.1-dev-smashed")

# Define the metrics
metrics = [
    TotalTimeMetric(n_iterations=20, n_warmup_iterations=5),
    ThroughputMetric(n_iterations=20, n_warmup_iterations=5),
    TorchMetricWrapper("clip"),
]

# Define the datamodule
datamodule = PrunaDataModule.from_string("LAION256")
datamodule.limit_datasets(10)

# Define the task and evaluation agent
task = Task(metrics, datamodule=datamodule, device=device)
eval_agent = EvaluationAgent(task)

# Evaluate smashed model and offload it to CPU
smashed_pipe.move_to_device(device)
smashed_pipe_results = eval_agent.evaluate(smashed_pipe)
smashed_pipe.move_to_device("cpu")
```
</hfoption>
<hfoption id="standalone model">

Instead of comparing the optimized model to the base model, you can also evaluate the standalone `diffusers` model. This is useful if you want to evaluate the performance of the model without the optimization. We can do so by wrapping the pipeline in `PrunaModel` and running the `EvaluationAgent` on it.
```python
import torch
from diffusers import FluxPipeline

from pruna import PrunaModel

# load the model
# Try PrunaAI/Segmind-Vega-smashed or PrunaAI/FLUX.1-dev-smashed with a small GPU memory
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
).to("cpu")
wrapped_pipe = PrunaModel(model=pipe)
```
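The wrapped pipeline can then be evaluated the same way as the optimized model; a minimal sketch, assuming the `Task`, `EvaluationAgent`, and `device` from the previous snippet are reused:

```python
# assumes `eval_agent` and `device` are defined as in the previous snippet
wrapped_pipe.move_to_device(device)
base_pipe_results = eval_agent.evaluate(wrapped_pipe)
wrapped_pipe.move_to_device("cpu")
```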
</hfoption>
</hfoptions>

Now that you have seen how to optimize and evaluate models with Pruna, you can start applying it to your own models. Luckily, we have many examples to help you get started.

> [!TIP]
> For more details about benchmarking Flux, check out the [Announcing FLUX-Juiced: The Fastest Image Generation Endpoint (2.6 times faster)!](https://huggingface.co/blog/PrunaAI/flux-fastest-image-generation-endpoint) blog post and the [InferBench](https://huggingface.co/spaces/PrunaAI/InferBench) Space.

## Reference

- [Pruna](https://github.com/pruna-ai/pruna)
- [Pruna optimization](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/configure.html#configure-algorithms)
- [Pruna evaluation](https://docs.pruna.ai/en/stable/docs_pruna/user_manual/evaluate.html)
- [Pruna tutorials](https://docs.pruna.ai/en/stable/docs_pruna/tutorials/index.html)

examples/advanced_diffusion_training/README_flux.md

Lines changed: 18 additions & 0 deletions

@@ -76,6 +76,24 @@ This command will prompt you for a token. Copy-paste yours from your [settings/t
 > `pip install wandb`
 > Alternatively, you can use other tools / train without reporting by modifying the flag `--report_to="wandb"`.

+ ### LoRA Rank and Alpha
+ Two key LoRA hyperparameters are the LoRA rank and LoRA alpha.
+ - `--rank`: Defines the dimension of the trainable LoRA matrices. A higher rank means more expressiveness and capacity to learn (and more parameters).
+ - `--lora_alpha`: A scaling factor for the LoRA's output. The LoRA update is scaled by `lora_alpha / lora_rank`.
+ - `lora_alpha` vs. `rank`: this ratio dictates the LoRA's effective strength (see the sketch after this section):
+   - `lora_alpha == rank`: scaling factor is 1. The LoRA is applied with its learned strength. (e.g., alpha=16, rank=16)
+   - `lora_alpha < rank`: scaling factor < 1. Reduces the LoRA's impact. Useful for subtle changes or to prevent overpowering the base model. (e.g., alpha=8, rank=16)
+   - `lora_alpha > rank`: scaling factor > 1. Amplifies the LoRA's impact. Allows a lower-rank LoRA to have a stronger effect. (e.g., alpha=32, rank=16)
+
+ > [!TIP]
+ > A common starting point is to set `lora_alpha` equal to `rank`.
+ > Some also set `lora_alpha` to be twice the `rank` (e.g., lora_alpha=32 for lora_rank=16)
+ > to give the LoRA updates more influence without increasing parameter count.
+ > If you find your LoRA is "overcooking" or learning too aggressively, consider setting `lora_alpha` to half of `rank`
+ > (e.g., lora_alpha=8 for rank=16). Experimentation is often key to finding the optimal balance for your use case.
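To make the scaling concrete, here is a small, self-contained sketch (not part of this commit) of how the `lora_alpha / rank` ratio scales the LoRA update for a single linear layer; the layer width and values are arbitrary illustrations.

```python
import torch

rank, lora_alpha = 16, 32            # illustrative values: alpha = 2 * rank
scaling = lora_alpha / rank          # 2.0 -> the learned update is amplified

features = 64                        # arbitrary layer width for the toy example
base_weight = torch.randn(features, features)

# LoRA decomposes the weight update into two low-rank matrices A and B.
lora_A = torch.randn(rank, features) * 0.01  # "down" projection
lora_B = torch.zeros(features, rank)         # "up" projection, zero-initialized

# effective weight applied at inference time
effective_weight = base_weight + scaling * (lora_B @ lora_A)
```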
### Target Modules
When LoRA was first adapted from language models to diffusion models, it was applied to the cross-attention layers in the Unet that relate the image representations with the prompts that describe them.
More recently, SOTA text-to-image diffusion models replaced the Unet with a diffusion Transformer (DiT). With this change, we may also want to explore
