
Commit 820a454

Merge branch 'main' into sigmas_flux_tests
2 parents 5d6b21c + c1926ce commit 820a454

36 files changed: 1997 additions, 1379 deletions

README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -114,7 +114,7 @@ Check out the [Quickstart](https://huggingface.co/docs/diffusers/quicktour) to l
 | [Tutorial](https://huggingface.co/docs/diffusers/tutorials/tutorial_overview) | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. |
 | [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading_overview) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
 | [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/pipeline_overview) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
-| [Optimization](https://huggingface.co/docs/diffusers/optimization/opt_overview) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
+| [Optimization](https://huggingface.co/docs/diffusers/optimization/fp16) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
 | [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. |

 ## Contribution
```

examples/community/README.md

Lines changed: 14 additions & 8 deletions
Large diffs are not rendered by default.

examples/community/README_community_scripts.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -6,9 +6,9 @@ If a community script doesn't work as expected, please open an issue and ping th

 | Example | Description | Code Example | Colab | Author |
 |:---|:---|:---|:---|---:|
-| Using IP-Adapter with Negative Noise | Using negative noise with IP-adapter to better control the generation (see the [original post](https://github.com/huggingface/diffusers/discussions/7167) on the forum for more details) | [IP-Adapter Negative Noise](#ip-adapter-negative-noise) | https://github.com/huggingface/notebooks/blob/main/diffusers/ip_adapter_negative_noise.ipynb | [Álvaro Somoza](https://github.com/asomoza)|
-| Asymmetric Tiling |configure seamless image tiling independently for the X and Y axes | [Asymmetric Tiling](#Asymmetric-Tiling ) |https://github.com/huggingface/notebooks/blob/main/diffusers/asymetric_tiling.ipynb | [alexisrolland](https://github.com/alexisrolland)|
-| Prompt Scheduling Callback |Allows changing prompts during a generation | [Prompt Scheduling-Callback](#Prompt-Scheduling-Callback ) |https://github.com/huggingface/notebooks/blob/main/diffusers/prompt_scheduling_callback.ipynb | [hlky](https://github.com/hlky)|
+| Using IP-Adapter with Negative Noise | Using negative noise with IP-adapter to better control the generation (see the [original post](https://github.com/huggingface/diffusers/discussions/7167) on the forum for more details) | [IP-Adapter Negative Noise](#ip-adapter-negative-noise) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/ip_adapter_negative_noise.ipynb) | [Álvaro Somoza](https://github.com/asomoza)|
+| Asymmetric Tiling |configure seamless image tiling independently for the X and Y axes | [Asymmetric Tiling](#Asymmetric-Tiling ) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/asymetric_tiling.ipynb) | [alexisrolland](https://github.com/alexisrolland)|
+| Prompt Scheduling Callback |Allows changing prompts during a generation | [Prompt Scheduling-Callback](#Prompt-Scheduling-Callback ) | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/prompt_scheduling_callback.ipynb) | [hlky](https://github.com/hlky)|


 ## Example usages
```

examples/controlnet/train_controlnet.py

Lines changed: 1 addition & 3 deletions
```diff
@@ -571,9 +571,6 @@ def parse_args(input_args=None):
     if args.dataset_name is None and args.train_data_dir is None:
         raise ValueError("Specify either `--dataset_name` or `--train_data_dir`")

-    if args.dataset_name is not None and args.train_data_dir is not None:
-        raise ValueError("Specify only one of `--dataset_name` or `--train_data_dir`")
-
     if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1:
         raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].")

@@ -615,6 +612,7 @@ def make_train_dataset(args, tokenizer, accelerator):
             args.dataset_name,
             args.dataset_config_name,
             cache_dir=args.cache_dir,
+            data_dir=args.train_data_dir,
         )
     else:
         if args.train_data_dir is not None:
```
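With this change, `--dataset_name` and `--train_data_dir` are no longer mutually exclusive: the local directory is simply forwarded to `load_dataset` as `data_dir` (the SDXL ControlNet and text-to-image scripts below get the same treatment). A minimal sketch of the resulting call, assuming the `datasets` library; the `imagefolder` builder and the local path are illustrative placeholders, not values from this commit:

```python
# Sketch: what make_train_dataset now effectively does when both flags are set.
# "imagefolder" and "./controlnet_data" are hypothetical example values.
from datasets import load_dataset

dataset = load_dataset(
    "imagefolder",                 # --dataset_name: a builder or Hub dataset id
    None,                          # --dataset_config_name (unused here)
    cache_dir=None,                # --cache_dir
    data_dir="./controlnet_data",  # --train_data_dir, forwarded via the new data_dir kwarg
)
print(dataset["train"].column_names)
```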

examples/controlnet/train_controlnet_sdxl.py

Lines changed: 1 addition & 3 deletions
```diff
@@ -598,9 +598,6 @@ def parse_args(input_args=None):
     if args.dataset_name is None and args.train_data_dir is None:
         raise ValueError("Specify either `--dataset_name` or `--train_data_dir`")

-    if args.dataset_name is not None and args.train_data_dir is not None:
-        raise ValueError("Specify only one of `--dataset_name` or `--train_data_dir`")
-
     if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1:
         raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].")

@@ -642,6 +639,7 @@ def get_train_dataset(args, accelerator):
             args.dataset_name,
             args.dataset_config_name,
             cache_dir=args.cache_dir,
+            data_dir=args.train_data_dir,
         )
     else:
         if args.train_data_dir is not None:
```

examples/text_to_image/train_text_to_image_sdxl.py

Lines changed: 1 addition & 4 deletions
```diff
@@ -483,7 +483,6 @@ def parse_args(input_args=None):
     # Sanity checks
     if args.dataset_name is None and args.train_data_dir is None:
         raise ValueError("Need either a dataset name or a training folder.")
-
     if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1:
         raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].")

@@ -824,9 +823,7 @@ def load_model_hook(models, input_dir):
     if args.dataset_name is not None:
         # Downloading and loading a dataset from the hub.
         dataset = load_dataset(
-            args.dataset_name,
-            args.dataset_config_name,
-            cache_dir=args.cache_dir,
+            args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, data_dir=args.train_data_dir
         )
     else:
         data_files = {}
```

src/diffusers/__init__.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -338,8 +338,8 @@
         "StableDiffusion3ControlNetPipeline",
         "StableDiffusion3Img2ImgPipeline",
         "StableDiffusion3InpaintPipeline",
-        "StableDiffusion3PAGPipeline",
         "StableDiffusion3PAGImg2ImgPipeline",
+        "StableDiffusion3PAGPipeline",
         "StableDiffusion3Pipeline",
         "StableDiffusionAdapterPipeline",
         "StableDiffusionAttendAndExcitePipeline",
```

src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py

Lines changed: 10 additions & 10 deletions
```diff
@@ -433,7 +433,7 @@ def create_forward(*inputs):
                 hidden_states,
                 temb,
                 zq,
-                conv_cache=conv_cache.get(conv_cache_key),
+                conv_cache.get(conv_cache_key),
             )
         else:
             hidden_states, new_conv_cache[conv_cache_key] = resnet(
@@ -531,7 +531,7 @@ def create_forward(*inputs):
                 return create_forward

             hidden_states, new_conv_cache[conv_cache_key] = torch.utils.checkpoint.checkpoint(
-                create_custom_forward(resnet), hidden_states, temb, zq, conv_cache=conv_cache.get(conv_cache_key)
+                create_custom_forward(resnet), hidden_states, temb, zq, conv_cache.get(conv_cache_key)
             )
         else:
             hidden_states, new_conv_cache[conv_cache_key] = resnet(
@@ -649,7 +649,7 @@ def create_forward(*inputs):
                 hidden_states,
                 temb,
                 zq,
-                conv_cache=conv_cache.get(conv_cache_key),
+                conv_cache.get(conv_cache_key),
             )
         else:
             hidden_states, new_conv_cache[conv_cache_key] = resnet(
@@ -789,7 +789,7 @@ def custom_forward(*inputs):
                 hidden_states,
                 temb,
                 None,
-                conv_cache=conv_cache.get(conv_cache_key),
+                conv_cache.get(conv_cache_key),
             )

             # 2. Mid
@@ -798,14 +798,14 @@ def custom_forward(*inputs):
                 hidden_states,
                 temb,
                 None,
-                conv_cache=conv_cache.get("mid_block"),
+                conv_cache.get("mid_block"),
             )
         else:
             # 1. Down
             for i, down_block in enumerate(self.down_blocks):
                 conv_cache_key = f"down_block_{i}"
                 hidden_states, new_conv_cache[conv_cache_key] = down_block(
-                    hidden_states, temb, None, conv_cache=conv_cache.get(conv_cache_key)
+                    hidden_states, temb, None, conv_cache.get(conv_cache_key)
                 )

             # 2. Mid
@@ -953,7 +953,7 @@ def custom_forward(*inputs):
                 hidden_states,
                 temb,
                 sample,
-                conv_cache=conv_cache.get("mid_block"),
+                conv_cache.get("mid_block"),
             )

             # 2. Up
@@ -964,7 +964,7 @@ def custom_forward(*inputs):
                 hidden_states,
                 temb,
                 sample,
-                conv_cache=conv_cache.get(conv_cache_key),
+                conv_cache.get(conv_cache_key),
             )
         else:
             # 1. Mid
@@ -1476,7 +1476,7 @@ def forward(
             z = posterior.sample(generator=generator)
         else:
             z = posterior.mode()
-        dec = self.decode(z)
+        dec = self.decode(z).sample
         if not return_dict:
             return (dec,)
-        return dec
+        return DecoderOutput(sample=dec)
```
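Two things change here: `conv_cache` is now passed positionally through the gradient-checkpointing wrappers (the `create_forward`/`custom_forward` closures visible in the hunk headers accept only positional `*inputs`), and `forward()` now unwraps and re-wraps the decoder output. A minimal sketch of the latter convention, using local stand-ins rather than the library classes:

```python
# Sketch (stand-ins, not the library classes): the return-type convention restored
# by the forward() fix -- decode() returns a DecoderOutput, callers take .sample,
# and forward() re-wraps the tensor when return_dict=True.
from dataclasses import dataclass

import torch


@dataclass
class DecoderOutput:  # stand-in for diffusers' DecoderOutput
    sample: torch.Tensor


def decode(z: torch.Tensor) -> DecoderOutput:
    return DecoderOutput(sample=z * 2.0)  # placeholder for the real VAE decoder


def forward(z: torch.Tensor, return_dict: bool = True):
    dec = decode(z).sample  # unwrap the tensor, as the fixed line does
    if not return_dict:
        return (dec,)
    return DecoderOutput(sample=dec)  # re-wrap for the dict-style return


out = forward(torch.randn(1, 16, 4, 32, 32))
print(type(out).__name__, out.sample.shape)
```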

src/diffusers/models/autoencoders/autoencoder_kl_temporal_decoder.py

Lines changed: 2 additions & 9 deletions
```diff
@@ -11,6 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import itertools
 from typing import Dict, Optional, Tuple, Union

 import torch
@@ -94,7 +95,7 @@ def forward(

         sample = self.conv_in(sample)

-        upscale_dtype = next(iter(self.up_blocks.parameters())).dtype
+        upscale_dtype = next(itertools.chain(self.up_blocks.parameters(), self.up_blocks.buffers())).dtype
         if torch.is_grad_enabled() and self.gradient_checkpointing:

             def create_custom_forward(module):
@@ -228,14 +229,6 @@ def __init__(

         self.quant_conv = nn.Conv2d(2 * latent_channels, 2 * latent_channels, 1)

-        sample_size = (
-            self.config.sample_size[0]
-            if isinstance(self.config.sample_size, (list, tuple))
-            else self.config.sample_size
-        )
-        self.tile_latent_min_size = int(sample_size / (2 ** (len(self.config.block_out_channels) - 1)))
-        self.tile_overlap_factor = 0.25
-
     def _set_gradient_checkpointing(self, module, value=False):
         if isinstance(module, (Encoder, TemporalDecoder)):
             module.gradient_checkpointing = value
```
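The dtype probe now chains parameters and buffers, which keeps `upscale_dtype` resolvable even if `up_blocks` exposes no parameters: `next(iter(module.parameters()))` raises `StopIteration` on a parameter-less module. A small self-contained illustration of that failure mode (not code from the commit):

```python
# Illustration (not from the commit): why chaining parameters and buffers is the
# safer dtype probe. A module that only registers buffers has no parameters, so
# next(iter(m.parameters())) raises StopIteration.
import itertools

import torch
from torch import nn


class BufferOnly(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("scale", torch.ones(1, dtype=torch.float16))


m = BufferOnly()
# next(iter(m.parameters()))  # would raise StopIteration
dtype = next(itertools.chain(m.parameters(), m.buffers())).dtype
print(dtype)  # torch.float16
```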

src/diffusers/models/autoencoders/autoencoder_tiny.py

Lines changed: 4 additions & 2 deletions
```diff
@@ -310,7 +310,9 @@ def decode(
         self, x: torch.Tensor, generator: Optional[torch.Generator] = None, return_dict: bool = True
     ) -> Union[DecoderOutput, Tuple[torch.Tensor]]:
         if self.use_slicing and x.shape[0] > 1:
-            output = [self._tiled_decode(x_slice) if self.use_tiling else self.decoder(x) for x_slice in x.split(1)]
+            output = [
+                self._tiled_decode(x_slice) if self.use_tiling else self.decoder(x_slice) for x_slice in x.split(1)
+            ]
             output = torch.cat(output)
         else:
             output = self._tiled_decode(x) if self.use_tiling else self.decoder(x)
@@ -341,7 +343,7 @@ def forward(
         # as if we were loading the latents from an RGBA uint8 image.
         unscaled_enc = self.unscale_latents(scaled_enc / 255.0)

-        dec = self.decode(unscaled_enc)
+        dec = self.decode(unscaled_enc).sample

         if not return_dict:
             return (dec,)
```
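The sliced path previously decoded the full batch `x` once per slice; it now decodes each `x_slice`, and `forward()` unwraps the `DecoderOutput` via `.sample`. A toy sketch of the corrected slicing pattern (the decoder below is a placeholder, not the tiny autoencoder):

```python
# Toy sketch of the corrected sliced-decode pattern; `decoder` is a placeholder.
import torch


def decoder(z: torch.Tensor) -> torch.Tensor:
    return z * 2.0  # stand-in for AutoencoderTiny's decoder network


def sliced_decode(x: torch.Tensor) -> torch.Tensor:
    if x.shape[0] > 1:
        # x.split(1) yields one slice per batch element; the bug decoded `x` (the
        # whole batch) inside this comprehension instead of `x_slice`.
        return torch.cat([decoder(x_slice) for x_slice in x.split(1)])
    return decoder(x)


print(sliced_decode(torch.randn(4, 4, 8, 8)).shape)  # torch.Size([4, 4, 8, 8])
```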
