
Commit e7faaf2

Merge branch 'main' into patch-1
2 parents f29bb1a + 8421c14 commit e7faaf2

30 files changed: +1479 −101 lines

README.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -114,7 +114,7 @@ Check out the [Quickstart](https://huggingface.co/docs/diffusers/quicktour) to l
 | [Tutorial](https://huggingface.co/docs/diffusers/tutorials/tutorial_overview) | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. |
 | [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading_overview) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
 | [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/pipeline_overview) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
-| [Optimization](https://huggingface.co/docs/diffusers/optimization/opt_overview) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
+| [Optimization](https://huggingface.co/docs/diffusers/optimization/fp16) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
 | [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. |
 ## Contribution
```

docs/source/en/api/pipelines/pag.md

Lines changed: 4 additions & 0 deletions
```diff
@@ -96,6 +96,10 @@ Since RegEx is supported as a way for matching layer identifiers, it is crucial
 - all
 - __call__

+## StableDiffusion3PAGImg2ImgPipeline
+[[autodoc]] StableDiffusion3PAGImg2ImgPipeline
+- all
+- __call__

 ## PixArtSigmaPAGPipeline
 [[autodoc]] PixArtSigmaPAGPipeline
```

examples/community/README.md

Lines changed: 14 additions & 8 deletions
Large diffs are not rendered by default.

examples/community/README_community_scripts.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -6,9 +6,9 @@ If a community script doesn't work as expected, please open an issue and ping th

 | Example | Description | Code Example | Colab | Author |
 |:--------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------:|
-| Using IP-Adapter with Negative Noise | Using negative noise with IP-adapter to better control the generation (see the [original post](https://github.com/huggingface/diffusers/discussions/7167) on the forum for more details) | [IP-Adapter Negative Noise](#ip-adapter-negative-noise) | https://github.com/huggingface/notebooks/blob/main/diffusers/ip_adapter_negative_noise.ipynb | [Álvaro Somoza](https://github.com/asomoza)|
-| Asymmetric Tiling |configure seamless image tiling independently for the X and Y axes | [Asymmetric Tiling](#Asymmetric-Tiling ) |https://github.com/huggingface/notebooks/blob/main/diffusers/asymetric_tiling.ipynb | [alexisrolland](https://github.com/alexisrolland)|
-| Prompt Scheduling Callback |Allows changing prompts during a generation | [Prompt Scheduling-Callback](#Prompt-Scheduling-Callback ) |https://github.com/huggingface/notebooks/blob/main/diffusers/prompt_scheduling_callback.ipynb | [hlky](https://github.com/hlky)|
+| Using IP-Adapter with Negative Noise | Using negative noise with IP-adapter to better control the generation (see the [original post](https://github.com/huggingface/diffusers/discussions/7167) on the forum for more details) | [IP-Adapter Negative Noise](#ip-adapter-negative-noise) |[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/ip_adapter_negative_noise.ipynb) | [Álvaro Somoza](https://github.com/asomoza)|
+| Asymmetric Tiling |configure seamless image tiling independently for the X and Y axes | [Asymmetric Tiling](#Asymmetric-Tiling ) |[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/asymetric_tiling.ipynb) | [alexisrolland](https://github.com/alexisrolland)|
+| Prompt Scheduling Callback |Allows changing prompts during a generation | [Prompt Scheduling-Callback](#Prompt-Scheduling-Callback ) |[Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/prompt_scheduling_callback.ipynb) | [hlky](https://github.com/hlky)|


 ## Example usages
```

examples/controlnet/train_controlnet.py

Lines changed: 1 addition & 3 deletions
```diff
@@ -571,9 +571,6 @@ def parse_args(input_args=None):
     if args.dataset_name is None and args.train_data_dir is None:
         raise ValueError("Specify either `--dataset_name` or `--train_data_dir`")

-    if args.dataset_name is not None and args.train_data_dir is not None:
-        raise ValueError("Specify only one of `--dataset_name` or `--train_data_dir`")
-
     if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1:
         raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].")

@@ -615,6 +612,7 @@ def make_train_dataset(args, tokenizer, accelerator):
             args.dataset_name,
             args.dataset_config_name,
             cache_dir=args.cache_dir,
+            data_dir=args.train_data_dir,
         )
     else:
         if args.train_data_dir is not None:
```
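The effect of this change is that `--dataset_name` and `--train_data_dir` are no longer mutually exclusive: a local directory can now accompany a hub dataset, since it is forwarded to `load_dataset` as `data_dir`. A minimal sketch of the validation that remains (function and return value are illustrative, not the script's actual code):

```python
def validate_data_args(dataset_name, train_data_dir):
    # After this commit only the "neither given" case is an error;
    # passing both is allowed because train_data_dir is forwarded to
    # load_dataset(..., data_dir=train_data_dir).
    if dataset_name is None and train_data_dir is None:
        raise ValueError("Specify either `--dataset_name` or `--train_data_dir`")
    return {"path": dataset_name, "data_dir": train_data_dir}

# Both options can now be combined without raising:
cfg = validate_data_args("some/dataset", "/data/local")
```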

examples/controlnet/train_controlnet_sdxl.py

Lines changed: 1 addition & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -598,9 +598,6 @@ def parse_args(input_args=None):
598598
if args.dataset_name is None and args.train_data_dir is None:
599599
raise ValueError("Specify either `--dataset_name` or `--train_data_dir`")
600600

601-
if args.dataset_name is not None and args.train_data_dir is not None:
602-
raise ValueError("Specify only one of `--dataset_name` or `--train_data_dir`")
603-
604601
if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1:
605602
raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].")
606603

@@ -642,6 +639,7 @@ def get_train_dataset(args, accelerator):
642639
args.dataset_name,
643640
args.dataset_config_name,
644641
cache_dir=args.cache_dir,
642+
data_dir=args.train_data_dir,
645643
)
646644
else:
647645
if args.train_data_dir is not None:

examples/text_to_image/train_text_to_image_sdxl.py

Lines changed: 1 addition & 4 deletions
```diff
@@ -483,7 +483,6 @@ def parse_args(input_args=None):
     # Sanity checks
     if args.dataset_name is None and args.train_data_dir is None:
         raise ValueError("Need either a dataset name or a training folder.")
-
     if args.proportion_empty_prompts < 0 or args.proportion_empty_prompts > 1:
         raise ValueError("`--proportion_empty_prompts` must be in the range [0, 1].")

@@ -824,9 +823,7 @@ def load_model_hook(models, input_dir):
     if args.dataset_name is not None:
         # Downloading and loading a dataset from the hub.
         dataset = load_dataset(
-            args.dataset_name,
-            args.dataset_config_name,
-            cache_dir=args.cache_dir,
+            args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, data_dir=args.train_data_dir
         )
     else:
         data_files = {}
```

src/diffusers/__init__.py

Lines changed: 2 additions & 0 deletions
```diff
@@ -338,6 +338,7 @@
         "StableDiffusion3ControlNetPipeline",
         "StableDiffusion3Img2ImgPipeline",
         "StableDiffusion3InpaintPipeline",
+        "StableDiffusion3PAGImg2ImgPipeline",
         "StableDiffusion3PAGPipeline",
         "StableDiffusion3Pipeline",
         "StableDiffusionAdapterPipeline",
@@ -807,6 +808,7 @@
         StableDiffusion3ControlNetPipeline,
         StableDiffusion3Img2ImgPipeline,
         StableDiffusion3InpaintPipeline,
+        StableDiffusion3PAGImg2ImgPipeline,
         StableDiffusion3PAGPipeline,
         StableDiffusion3Pipeline,
         StableDiffusionAdapterPipeline,
```

src/diffusers/models/attention_processor.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -1171,6 +1171,7 @@ def __call__(
         attn: Attention,
         hidden_states: torch.FloatTensor,
         encoder_hidden_states: torch.FloatTensor = None,
+        attention_mask: Optional[torch.FloatTensor] = None,
     ) -> torch.FloatTensor:
         residual = hidden_states
```

src/diffusers/models/autoencoders/autoencoder_kl_temporal_decoder.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -11,6 +11,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import itertools
 from typing import Dict, Optional, Tuple, Union

 import torch

@@ -94,7 +95,7 @@ def forward(

         sample = self.conv_in(sample)

-        upscale_dtype = next(iter(self.up_blocks.parameters())).dtype
+        upscale_dtype = next(itertools.chain(self.up_blocks.parameters(), self.up_blocks.buffers())).dtype
         if torch.is_grad_enabled() and self.gradient_checkpointing:

             def create_custom_forward(module):
```
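The motivation for this change: `next(iter(self.up_blocks.parameters()))` raises `StopIteration` when the block exposes no parameters, whereas chaining in `buffers()` lets the dtype lookup fall back to registered buffers. A torch-free sketch of the pattern (`FakeTensor` and `first_dtype` are illustrative names, not library code):

```python
import itertools

class FakeTensor:
    """Stand-in for a torch tensor; only the dtype attribute matters here."""
    def __init__(self, dtype):
        self.dtype = dtype

def first_dtype(parameters, buffers):
    # next() consumes parameters first and only then buffers, so a module
    # holding nothing but buffers still yields a dtype instead of raising
    # StopIteration the way next(iter(parameters)) would.
    return next(itertools.chain(parameters, buffers)).dtype
```

Usage: `first_dtype(iter([]), iter([FakeTensor("float16")]))` succeeds where the pre-fix lookup would fail, and modules that do have parameters are unaffected since their parameters are consumed first.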
