
Commit 12c9ac4

Merge branch 'main' into feat/check-doc-listing

2 parents 15b2f57 + fdb1baa

File tree: 65 files changed, +2040 −92 lines


.github/workflows/pr_tests.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -157,7 +157,7 @@ jobs:
         if: ${{ matrix.config.framework == 'pytorch_examples' }}
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
-          python -m uv pip install peft
+          python -m uv pip install peft timm
           python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \
             --make-reports=tests_${{ matrix.config.report }} \
             examples
```

.github/workflows/push_tests.yml

Lines changed: 1 addition & 0 deletions
```diff
@@ -426,6 +426,7 @@ jobs:
           HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
+          python -m uv pip install timm
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v --make-reports=examples_torch_cuda examples/

       - name: Failure short reports
```

.github/workflows/push_tests_fast.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -107,7 +107,7 @@ jobs:
         if: ${{ matrix.config.framework == 'pytorch_examples' }}
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
-          python -m uv pip install peft
+          python -m uv pip install peft timm
           python -m pytest -n 4 --max-worker-restart=0 --dist=loadfile \
             --make-reports=tests_${{ matrix.config.report }} \
             examples
```

.github/workflows/push_tests_mps.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -23,7 +23,7 @@ concurrency:
 jobs:
   run_fast_tests_apple_m1:
     name: Fast PyTorch MPS tests on MacOS
-    runs-on: [ self-hosted, apple-m1 ]
+    runs-on: macos-13-xlarge

     steps:
       - name: Checkout diffusers
```

docs/source/en/_toctree.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -305,6 +305,8 @@
       title: Personalized Image Animator (PIA)
     - local: api/pipelines/pixart
       title: PixArt-α
+    - local: api/pipelines/pixart_sigma
+      title: PixArt-Σ
     - local: api/pipelines/self_attention_guidance
       title: Self-Attention Guidance
     - local: api/pipelines/semantic_stable_diffusion
```

docs/source/en/api/pipelines/pixart.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@ Some notes about this pipeline:

 <Tip>

-Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
+Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

 </Tip>
```

docs/source/en/api/pipelines/pixart_sigma.md

Lines changed: 151 additions & 0 deletions

New file contents:

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# PixArt-Σ

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pixart/header_collage_sigma.jpg)

[PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation](https://huggingface.co/papers/2403.04692) is by Junsong Chen, Jincheng Yu, Chongjian Ge, Lewei Yao, Enze Xie, Yue Wu, Zhongdao Wang, James Kwok, Ping Luo, Huchuan Lu, and Zhenguo Li.

The abstract from the paper is:

*In this paper, we introduce PixArt-Σ, a Diffusion Transformer model (DiT) capable of directly generating images at 4K resolution. PixArt-Σ represents a significant advancement over its predecessor, PixArt-α, offering images of markedly higher fidelity and improved alignment with text prompts. A key feature of PixArt-Σ is its training efficiency. Leveraging the foundational pre-training of PixArt-α, it evolves from the ‘weaker’ baseline to a ‘stronger’ model via incorporating higher quality data, a process we term “weak-to-strong training”. The advancements in PixArt-Σ are twofold: (1) High-Quality Training Data: PixArt-Σ incorporates superior-quality image data, paired with more precise and detailed image captions. (2) Efficient Token Compression: we propose a novel attention module within the DiT framework that compresses both keys and values, significantly improving efficiency and facilitating ultra-high-resolution image generation. Thanks to these improvements, PixArt-Σ achieves superior image quality and user prompt adherence capabilities with significantly smaller model size (0.6B parameters) than existing text-to-image diffusion models, such as SDXL (2.6B parameters) and SD Cascade (5.1B parameters). Moreover, PixArt-Σ’s capability to generate 4K images supports the creation of high-resolution posters and wallpapers, efficiently bolstering the production of high-quality visual content in industries such as film and gaming.*

You can find the original codebase at [PixArt-alpha/PixArt-sigma](https://github.com/PixArt-alpha/PixArt-sigma) and all the available checkpoints at [PixArt-alpha](https://huggingface.co/PixArt-alpha).

Some notes about this pipeline:

* It uses a Transformer backbone (instead of a UNet) for denoising. As such it has a similar architecture as [DiT](https://hf.co/docs/transformers/model_doc/dit).
* It was trained using text conditions computed from T5. This aspect makes the pipeline better at following complex text prompts with intricate details.
* It is good at producing high-resolution images at different aspect ratios. To get the best results, the authors recommend some size brackets which can be found [here](https://github.com/PixArt-alpha/PixArt-sigma/blob/master/diffusion/data/datasets/utils.py).
* It rivals the quality of state-of-the-art text-to-image generation systems (as of this writing) such as PixArt-α, Stable Diffusion XL, Playground V2.0, and DALL-E 3, while being more efficient than them.
* It can generate very high-resolution images, such as 2048px or even 4K.
* It shows that text-to-image models can grow from a weak model to a stronger one through several improvements (VAEs, datasets, and so on).

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>

## Inference with under 8GB GPU VRAM

Run the [`PixArtSigmaPipeline`] with under 8GB GPU VRAM by loading the text encoder in 8-bit precision. Let's walk through a full-fledged example.

First, install the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library:

```bash
pip install -U bitsandbytes
```

Then load the text encoder in 8-bit:

```python
from transformers import T5EncoderModel
from diffusers import PixArtSigmaPipeline
import torch

text_encoder = T5EncoderModel.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    subfolder="text_encoder",
    load_in_8bit=True,
    device_map="auto",
)
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    text_encoder=text_encoder,
    transformer=None,
    device_map="balanced",
)
```

Now, use the `pipe` to encode a prompt:

```python
with torch.no_grad():
    prompt = "cute cat"
    prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt)
```

Since the text embeddings have been computed, remove the `text_encoder` and `pipe` from memory to free up some GPU VRAM:

```python
import gc

def flush():
    gc.collect()
    torch.cuda.empty_cache()

del text_encoder
del pipe
flush()
```

Then compute the latents with the prompt embeddings as inputs:

```python
pipe = PixArtSigmaPipeline.from_pretrained(
    "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
    text_encoder=None,
    torch_dtype=torch.float16,
).to("cuda")

latents = pipe(
    negative_prompt=None,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_embeds,
    prompt_attention_mask=prompt_attention_mask,
    negative_prompt_attention_mask=negative_prompt_attention_mask,
    num_images_per_prompt=1,
    output_type="latent",
).images

del pipe.transformer
flush()
```

<Tip>

Notice that while initializing `pipe`, you're setting `text_encoder` to `None` so that it's not loaded.

</Tip>

Once the latents are computed, pass them to the VAE to decode into a real image:

```python
with torch.no_grad():
    image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0]
image = pipe.image_processor.postprocess(image, output_type="pil")[0]
image.save("cat.png")
```

By deleting components you aren't using and flushing the GPU VRAM, you should be able to run [`PixArtSigmaPipeline`] with under 8GB GPU VRAM.

![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/pixart/8bits_cat.png)

If you want a report of your memory usage, run this [script](https://gist.github.com/sayakpaul/3ae0f847001d342af27018a96f467e4e).

<Tip warning={true}>

Text embeddings computed in 8-bit can impact the quality of the generated images because of the information loss in the representation space caused by the reduced precision. It's recommended to compare the outputs with and without 8-bit.

</Tip>

While loading the `text_encoder`, you set `load_in_8bit` to `True`. You could also specify `load_in_4bit` to bring your memory requirements down even further to under 7GB.

## PixArtSigmaPipeline

[[autodoc]] PixArtSigmaPipeline
	- all
	- __call__

docs/source/en/api/video_processor.md

Lines changed: 7 additions & 1 deletion
```diff
@@ -12,4 +12,10 @@ specific language governing permissions and limitations under the License.

 # Video Processor

-The `VideoProcessor` provides a unified API for video pipelines to prepare inputs for VAE encoding and post-processing outputs once they're decoded. The class inherits [`VaeImageProcessor`] so it includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays.
+The [`VideoProcessor`] provides a unified API for video pipelines to prepare inputs for VAE encoding and post-processing outputs once they're decoded. The class inherits [`VaeImageProcessor`] so it includes transformations such as resizing, normalization, and conversion between PIL Image, PyTorch, and NumPy arrays.
+
+## VideoProcessor
+
+[[autodoc]] video_processor.VideoProcessor.preprocess_video
+
+[[autodoc]] video_processor.VideoProcessor.postprocess_video
```
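The normalization round-trip such a processor performs (uint8 frames to the VAE's [-1, 1] input range and back) can be sketched in a few lines of NumPy. The function names here are illustrative, not the diffusers API:

```python
import numpy as np

def preprocess_frames(frames):
    """Map uint8 frames in [0, 255] to float32 in [-1, 1] (typical VAE input range)."""
    return frames.astype(np.float32) / 127.5 - 1.0

def postprocess_frames(decoded):
    """Map outputs in [-1, 1] back to uint8 frames in [0, 255]."""
    return np.clip((decoded + 1.0) * 127.5, 0, 255).round().astype(np.uint8)

# 4 RGB frames of 64x64 pixels
frames = np.random.randint(0, 256, size=(4, 64, 64, 3), dtype=np.uint8)
restored = postprocess_frames(preprocess_frames(frames))
print(np.array_equal(frames, restored))  # True
```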

docs/source/en/optimization/fp16.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License.

 # Speed up inference

-There are several ways to optimize Diffusers for inference speed, such as reducing the computational burden by lowering the data precision or using a lightweight distilled model. There are also memory-efficient attention implementations, [xFormers](xformers) and [scaled dot product attetntion](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) in PyTorch 2.0, that reduce memory usage which also indirectly speeds up inference. Different speed optimizations can be stacked together to get the fastest inference times.
+There are several ways to optimize Diffusers for inference speed, such as reducing the computational burden by lowering the data precision or using a lightweight distilled model. There are also memory-efficient attention implementations, [xFormers](xformers) and [scaled dot product attention](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) in PyTorch 2.0, that reduce memory usage which also indirectly speeds up inference. Different speed optimizations can be stacked together to get the fastest inference times.

 > [!TIP]
 > Optimizing for inference speed or reduced memory usage can lead to improved performance in the other category, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about lowering memory usage in the [Reduce memory usage](memory) guide.
```
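The "lowering the data precision" trade-off mentioned in that guide can be demonstrated with a standalone NumPy sketch (illustrative only, not diffusers code): casting a weight matrix from float32 to half precision halves its memory footprint at a small accuracy cost.

```python
import numpy as np

weights = np.random.randn(1024, 1024).astype(np.float32)
half = weights.astype(np.float16)

# Half precision uses exactly half the bytes.
print(weights.nbytes // half.nbytes)  # 2

# The rounding error introduced by the cast is small relative to the weights.
max_rel_err = np.abs(weights - half.astype(np.float32)).max() / np.abs(weights).max()
print(max_rel_err < 1e-2)  # True
```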

examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -981,7 +981,7 @@ def collate_fn(examples, with_prior_preservation=False):


 class PromptDataset(Dataset):
-    "A simple dataset to prepare the prompts to generate class images on multiple GPUs."
+    """A simple dataset to prepare the prompts to generate class images on multiple GPUs."""

     def __init__(self, prompt, num_samples):
         self.prompt = prompt
```
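For context, the pattern this class follows can be sketched as a minimal map-style dataset (here without the `torch.utils.data.Dataset` base class for brevity; field names are illustrative): it repeats one prompt `num_samples` times so a distributed sampler can shard class-image generation across GPUs.

```python
class PromptDataset:
    """A simple dataset pairing a fixed prompt with a sample index (illustrative sketch)."""

    def __init__(self, prompt, num_samples):
        self.prompt = prompt
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, index):
        # Each item carries the same prompt plus its index,
        # so workers can name/track the images they generate.
        return {"prompt": self.prompt, "index": index}

ds = PromptDataset("a photo of sks dog", 3)
print(len(ds))         # 3
print(ds[1]["index"])  # 1
```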
