
Commit 559fab8

pypi-diffusers: Autospec creation for update from version 0.31.0 to version 0.32.0
Abhipsha Das (1):
      [Model Card] standardize advanced diffusion training sd15 lora (#7613)

Aditya Raj (1):
      [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in function prepare_latents caused by audio_vae_length (#10306)

Anand Kumar (1):
      [Bug fix] "previous_timestep()" in DDPM scheduling compatible with "trailing" and "linspace" options (#9384)

Andrés Romero (1):
      Flux Control(Depth/Canny) + Inpaint (#10192)

Aritra Roy Gosthipaty (1):
      [Guide] Quantize your Diffusion Models with `bnb` (#10012)

Aryan (32):
      [core] Allegro T2V (#9736)
      Allegro VAE fix (#9811)
      [core] Mochi T2V (#9769)
      Make CogVideoX RoPE implementation consistent (#9963)
      Fix prepare latent image ids and vae sample generators for flux (#9981)
      Flux Fill, Canny, Depth, Redux (#9985)
      [docs] Fix CogVideoX table (#10008)
      Use torch.device instead of current device index for BnB quantizer (#10069)
      Remove duplicate checks for len(generator) != batch_size when generator is a list (#10134)
      [Single file] Support `revision` argument when loading single file config (#10168)
      Flux Control LoRA (#9999)
      [core] LTX Video (#10021)
      Test error raised when loading normal and expanding loras together in Flux (#10188)
      [core] Hunyuan Video (#10136)
      [core] TorchAO Quantizer (#10009)
      Fix copied from comment in Mochi lora loader (#10255)
      [LoRA] Support LTX Video (#10228)
      [docs] Clarify dtypes for Sana (#10248)
      [tests] Remove/rename unsupported quantization torchao type (#10263)
      [tests] Fix broken cuda, nightly and lora tests on main for CogVideoX (#10270)
      Rename Mochi integration test correctly (#10220)
      [tests] remove nullop import checks from lora tests (#10273)
      Hunyuan VAE tiling fixes and transformer docs (#10295)
      Fix failing lora tests after HunyuanVideo lora (#10307)
      Add support for sharded models when TorchAO quantization is enabled (#10256)
      Make tensors in ResNet contiguous for Hunyuan VAE (#10309)
      Community hosted weights for diffusers format HunyuanVideo weights (#10344)
      Bump minimum TorchAO version to 0.7.0 (#10293)
      [tests] Refactor TorchAO serialization fast tests (#10271)
      Fix failing CogVideoX LoRA fuse test (#10352)
      Rename LTX blocks and docs title (#10213)
      [core] LTX Video 0.9.1 (#10330)

Bagheera (1):
      add skip_layers argument to SD3 transformer model class (#9880)

Benjamin Paine (2):
      Fix Progress Bar Updates in SD 1.5 PAG Img2Img pipeline (#9925)
      Add StableDiffusion3PAGImg2Img Pipeline + Fix SD3 Unconditional PAG (#9932)

Bios (1):
      update StableDiffusion3Img2ImgPipeline.add image size validation (#10166)

Biswaroop (1):
      [Fix] remove setting lr for T5 text encoder when using prodigy in flux dreambooth lora script (#9473)

Boseong Jeon (1):
      Handling mixed precision for dreambooth flux lora training (#9565)

Canva (1):
      Add support for XFormers in SD3 (#8583)

ChG (1):
      fix link in the docs (#10058)

DTG (1):
      Fix some documentation in ./src/diffusers/models/embeddings.py for demo (#9579)

Daniel Regado (1):
      [WIP] SD3.5 IP-Adapter Pipeline Integration (#9987)

Darshil Jariwala (1):
      Add PAG Support for Stable Diffusion Inpaint Pipeline (#9386)

Dhruv Nair (18):
      Improve downloads of sharded variants (#9869)
      [CI] Unpin torch<2.5 in CI (#9961)
      Flux latents fix (#9929)
      [Single File] Fix SD3.5 single file loading (#10077)
      [Single File] Pass token when fetching interpreted config (#10082)
      [Single File] Add single file support for AutoencoderDC (#10183)
      Fix format issue in push_test yml (#10235)
      [Single File] Add GGUF support (#9964)
      Fix Mochi Quality Issues (#10033)
      Fix Doc links in GGUF and Quantization overview docs (#10279)
      Make zeroing prompt embeds for Mochi Pipeline configurable (#10284)
      [Single File] Add single file support for Flux Canny, Depth and Fill (#10288)
      [Single File] Add single file support for Mochi Transformer (#10268)
      Allow Mochi Transformer to be split across multiple GPUs (#10300)
      [Single File] Add GGUF support for LTX (#10298)
      Mochi docs (#9934)
      [Single File] Add Single File support for HunYuan video (#10320)
      [Single File] Fix loading (#10349)

Dimitri Barbot (2):
      Update sdxl reference pipeline to latest sdxl pipeline (#9938)
      Add sdxl controlnet reference community pipeline (#9893)

Dorsa Rohani (1):
      Add Diffusion Policy for Reinforcement Learning (#9824)

Eliseu Silva (1):
      Feature IP Adapter Xformers Attention Processor (#9881)

Emmanuel Benazera (1):
      fix: missing AutoencoderKL lora adapter (#9807)

Ethan Smith (1):
      fix min-snr implementation (#8466)

Fanli Lin (3):
      fix bug in `require_accelerate_version_greater` (#9746)
      make `pipelines` tests device-agnostic (part1) (#9399)
      make `pipelines` tests device-agnostic (part2) (#9400)

Grant Sherrick (1):
      Add server example (#9918)

Heavenn (1):
      Modify apply_overlay for inpainting with padding_mask_crop (Inpainting area: "Only Masked") (#8793)

Ina (1):
      [refactor] enhance readability of flux related pipelines (#9711)

Ivan Skorokhodov (1):
      Use parameters + buffers when deciding upscale_dtype (#9882)

Jingya HUANG (1):
      Add a doc for AWS Neuron in Diffusers (#9766)

Jonathan Yin (1):
      Fix Nonetype attribute error when loading multiple Flux loras (#10182)

Juan Acevedo (2):
      Update ptxla training (#9864)
      add reshape to fix use_memory_efficient_attention in flax (#7918)

Junjie (1):
      Add offload option in flux-control training (#10225)

Junsong Chen (4):
      [DC-AE] Add the official Deep Compression Autoencoder code(32x,64x,128x compression ratio); (#9708)
      [Sana] Add Sana, including `SanaPipeline`, `SanaPAGPipeline`, `LinearAttentionProcessor`, `Flow-based DPM-sovler` and so on. (#9982)
      [Sana]add 2K related model for Sana (#10322)
      [Sana bug] bug fix for 2K model config (#10340)

Kaiwen Sheng (1):
      fix downsample bug in MidResTemporalBlock1D (#10250)

Leo Jiang (3):
      [bugfix] bugfix for npu free memory (#9640)
      NPU Adaption for FLUX (#9751)
      Reduce Memory Cost in Flux Training (#9829)

Leojc (1):
      docs: fix a mistake in docstring (#10319)

Linoy Tsaban (9):
      [SD3-5 dreambooth lora] update model cards (#9749)
      [SD 3.5 Dreambooth LoRA] support configurable training block & layers (#9762)
      [flux dreambooth lora training] make LoRA target modules configurable + small bug fix (#9646)
      [advanced flux training] bug fix + reduce memory cost as in #9829 (#9838)
      [SD3 dreambooth lora] smol fix to checkpoint saving (#9993)
      [Flux Redux] add prompt & multiple image input (#10056)
      [community pipeline] Add RF-inversion Flux pipeline (#9816)
      [community pipeline rf-inversion] - fix example in doc (#10179)
      [RF inversion community pipeline] add eta_decay (#10199)

Lucain (1):
      Let server decide default repo visibility (#10047)

Mehmet Yiğit Özgenç (1):
      flux controlnet inpaint config bug (#10291)

Michael Tkachuk (1):
      Enabling gradient checkpointing in eval() mode (#9878)

Miguel Farinha (1):
      Allow image resolutions multiple of 8 instead of 64 in SVD pipeline (#6646)

Pakkapon Phongthawee (1):
      add depth controlnet sd3 pre-trained checkpoints to docs (#9937)

Parag Ekbote (10):
      Notebooks for Community Scripts Examples (#9905)
      Move Wuerstchen Dreambooth to research_projects (#9935)
      Fixed Nits in Docs and Example Script (#9940)
      Notebooks for Community Scripts-2 (#9952)
      Move IP Adapter Scripts to research project (#9960)
      Notebooks for Community Scripts-3 (#10032)
      Fixed Nits in Evaluation Docs (#10063)
      Notebooks for Community Scripts-4 (#10094)
      Fix Broken Link in Optimization Docs (#10105)
      Fix Broken Links in ReadMe (#10117)

Pauline Bailly-Masson (1):
      Ci update tpu (#10197)

Pedro Cuenca (2):
      Interpolate fix on cuda for large output tensors (#10067)
      Don't stale close-to-merge (#10096)

Qin Zhou (1):
      Support pass kwargs to sd3 custom attention processor (#9818)

Rachit Shah (1):
      config attribute not foud error for FluxImagetoImage Pipeline for multi controlnet solved (#9586)

Raul Ciotescu (1):
      adds the pipeline for pixart alpha controlnet (#8857)

RogerSinghChugh (1):
      Refac training utils.py (#9815)

SahilCarterr (9):
      Added Support of Xlabs controlnet to FluxControlNetInpaintPipeline (#9770)
      Fixes EMAModel "from_pretrained" method (#9779)
      [Fix] Test of sd3 lora (#9843)
      Updated _encode_prompt_with_clip and encode_prompt in train_dreamboth_sd3 (#9800)
      [fix] Replaced shutil.copy with shutil.copyfile (#9885)
      [FIX] Fix TypeError in DreamBooth SDXL when use_dora is False (#9879)
      [Fix] Syntax error (#10068)
      [FIX] Bug in FluxPosEmbed (#10115)
      Added Error when len(gligen_images ) is not equal to len(gligen_phrases) in StableDiffusionGLIGENTextImagePipeline (#10176)

Sam (1):
      Update pipeline_flux_img2img.py (#9928)

Sayak Paul (47):
      post-release 0.31.0 (#9742)
      Some minor updates to the nightly and push workflows (#9759)
      [research_projects] add flux training script with quantization (#9754)
      [research_projects] Update README.md to include a note about NF5 T5-xxl (#9775)
      [CI] add new runner for testing (#9699)
      [training] fixes to the quantization training script and add AdEMAMix optimizer as an option (#9806)
      [training] use the lr when using 8bit adam. (#9796)
      [Tests] clean up and refactor gradient checkpointing tests (#9494)
      [CI] add a big GPU marker to run memory-intensive tests separately on CI (#9691)
      [LoRA] fix: lora loading when using with a device_mapped model. (#9449)
      [feat] add `load_lora_adapter()` for compatible models (#9712)
      [Core] introduce `controlnet` module (#8768)
      [Flux] reduce explicit device transfers and typecasting in flux. (#9817)
      [Advanced LoRA v1.5] fix: gradient unscaling problem (#7018)
      Revert "[Flux] reduce explicit device transfers and typecasting in flux." (#9896)
      [LoRA] feat: `save_lora_adapter()` (#9862)
      [LoRA] enable LoRA for Mochi-1 (#9943)
      [Tests] skip nan lora tests on PyTorch 2.5.1 CPU. (#9975)
      [Docs] add: missing pipelines from the spec. (#10005)
      [Mochi-1] ensuring to compute the fourier features in FP32 in Mochi encoder (#10031)
      [CI] Add quantization (#9832)
      [tests] refactor vae tests (#9808)
      [bitsandbytes] allow directly CUDA placements of pipelines loaded with bnb components (#9840)
      [Tests] fix condition argument in xfail. (#10099)
      [Tests] xfail incompatible SD configs. (#10127)
      [LoRA] depcrecate save_attn_procs(). (#10126)
      [LoRA] add a test to ensure `set_adapters()` and attn kwargs outs match (#10110)
      [CI] merge peft pr workflow into the main pr workflow. (#10042)
      [WIP][Training] Flux Control LoRA training script (#10130)
      [Tests] update always test pipelines list. (#10143)
      Update sana.md with minor corrections (#10232)
      [docs] minor stuff to ltx video docs. (#10229)
      [Docs] add rest of the lora loader mixins to the docs. (#10230)
      [chore] add contribution note for lawrence. (#10253)
      [LoRA] feat: lora support for SANA. (#10234)
      [chore] fix: licensing headers in mochi and ltx (#10275)
      [chore] fix: reamde -> readme (#10276)
      [chore] Update README_sana.md to update the default model (#10285)
      [LoRA] feat: support loading regular Flux LoRAs into Flux Control, and Fill (#10259)
      [Tests] add integration tests for lora expansion stuff in Flux. (#10318)
      [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
      [Docs] Update gguf.md to remove generator from the pipeline from_pretrained (#10299)
      [docs] fix: torchao example. (#10278)
      [SANA LoRA] sana lora training tests and misc. (#10296)
      [Tests] QoL improvements to the LoRA test suite (#10304)
      [LoRA] test fix (#10351)
      [Tests] Fix more tests sayak (#10359)

ScilenceForest (1):
      Update train_controlnet_flux.py,Fix size mismatch issue in validation (#9679)

Shenghai Yuan (1):
      [LoRA] Support HunyuanVideo (#10254)

SkyCol (1):
      Add prompt about wandb in examples/dreambooth/readme. (#10014)

Soof Golan (1):
      Improve post-processing performance (#10170)

Sookwan Han (1):
      Add new community pipeline for 'Adaptive Mask Inpainting', introduced in [ECCV2024] ComA (#9228)

StAlKeR7779 (1):
      DPM++ third order fixes (#9104)

Steven Liu (4):
      [docs] load_lora_adapter (#10119)
      [docs] Add missing AttnProcessors (#10246)
      [docs] delete_adapters() (#10245)
      [docs] Fix quantization links (#10323)

Thien Tran (1):
      `.from_single_file()` - Add missing `.shape` (#10332)

Vahid Askari (1):
      Fix: Remove duplicated comma in distributed_inference.md (#9868)

Vinh H. Pham (1):
      [Fix] train_dreambooth_lora_flux_advanced ValueError: unexpected save model: <class 'transformers.models.t5.modeling_t5.T5EncoderModel'> (#9777)

Xinyuan Zhao (1):
      Make `time_embed_dim` of `UNet2DModel` changeable (#10262)

YiYi Xu (6):
      Revert "[LoRA] fix: lora loading when using with a device_mapped mode… (#9823)
      fix controlnet module refactor (#9968)
      Sd35 controlnet (#10020)
      fix offloading for sd3.5 controlnets (#10072)
      pass attn mask arg for flux (#10122)
      update `get_parameter_dtype` (#10342)

Yu Zheng (1):
      support sd3.5 for controlnet example (#9860)

Yuxuan.Zhang (1):
      CogVideoX 1.5 (#9877)

Zhiyang Shen (1):
      [Docs] fix docstring typo in SD3 pipeline (#9765)

_ (1):
      Correct pipeline_output.py to the type Mochi (#9945)

aihao (1):
      update (#7067)

cjkangme (2):
      [Community Pipeline] Add some feature for regional prompting pipeline (#9874)
      [Community Pipeline] Fix typo that cause error on regional prompting pipeline (#10251)

dg845 (1):
      Enable Gradient Checkpointing for UNet2DModel (New) (#7201)

djm (1):
      unet's `sample_size` attribute is to accept tuple(h, w) in `StableDiffusionPipeline` (#10181)

fancy45daddy (2):
      add torch_xla support in pipeline_stable_audio.py (#10109)
      Update pipeline_controlnet.py add support for pytorch_xla (#10222)

hlky (35):
      Fix beta and exponential sigmas + add tests (#9954)
      ControlNet from_single_file when already converted (#9978)
      Add `beta`, `exponential` and `karras` sigmas to `FlowMatchEulerDiscreteScheduler` (#10001)
      Add `sigmas` to Flux pipelines (#10081)
      Fix `num_images_per_prompt>1` with Skip Guidance Layers in `StableDiffusion3Pipeline` (#10086)
      Convert `sigmas` to `np.array` in FlowMatch set_timesteps (#10088)
      Fix multi-prompt inference (#10103)
      Test `skip_guidance_layers` in SD3 pipeline (#10102)
      Fix `pipeline_stable_audio` formating (#10114)
      Add `sigmas` to pipelines using FlowMatch (#10116)
      Use `torch` in `get_3d_rotary_pos_embed`/`_allegro` (#10161)
      Add ControlNetUnion (#10131)
      Remove `negative_*` from SDXL callback (#10203)
      refactor StableDiffusionXLControlNetUnion (#10200)
      Use `torch` in `get_2d_sincos_pos_embed` and `get_3d_sincos_pos_embed` (#10156)
      Use `t` instead of `timestep` in `_apply_perturbed_attention_guidance` (#10243)
      Add `dynamic_shifting` to SD3 (#10236)
      Fix `use_flow_sigmas` (#10242)
      Fix ControlNetUnion _callback_tensor_inputs (#10218)
      Use non-human subject in StableDiffusion3ControlNetPipeline example (#10214)
      Add enable_vae_tiling to AllegroPipeline, fix example (#10212)
      Fix checkpoint in CogView3PlusPipeline example (#10211)
      Fix RePaint Scheduler (#10185)
      Add ControlNetUnion to AutoPipeline from_pretrained (#10219)
      Add `set_shift` to FlowMatchEulerDiscreteScheduler (#10269)
      Use `torch` in `get_2d_rotary_pos_embed` (#10155)
      Fix sigma_last with use_flow_sigmas (#10267)
      Add Flux Control to AutoPipeline (#10292)
      Check correct model type is passed to `from_pretrained` (#10189)
      Fix `local_files_only` for checkpoints with shards (#10294)
      Fix push_tests_mps.yml (#10326)
      Fix EMAModel test_from_pretrained (#10325)
      Support Flux IP Adapter (#10261)
      Fix enable_sequential_cpu_offload in test_kandinsky_combined (#10324)
      Fix FluxIPAdapterTesterMixin (#10354)

linjiapro (2):
      Improve control net block index for sd3 (#9758)
      Fix a bug for SD35 control net training and improve control net block index (#10065)

lsb (1):
      Avoid compiling a progress bar. (#10098)

raulmosa (1):
      Update handle single blocks on _convert_xlabs_flux_lora_to_diffusers (#9915)

sayakpaul (1):
      Release: v0.32.0

skotapati (1):
      Remove mps workaround for fp16 GELU, which is now supported natively (#10133)

suzukimain (1):
      [community] Load Models from Sources like `Civitai` into Existing Pipelines (#9986)

zhangp365 (2):
      Fix a bug in the state dict judgment in ip_adapter.py. (#10095)
      fixed a dtype bfloat16 bug in torch_utils.py (#10125)

Álvaro Somoza (2):
      [Official callbacks] SDXL Controlnet CFG Cutoff (#9311)
      Change image_gen_aux repository URL (#10048)

ちくわぶ (1):
      Add all AttnProcessor classes in `AttentionProcessor` type (#9909)

赵三石 (1):
      Update lora_conversion_utils.py (#9980)

高佳宝 (1):
      Update ip_adapter.py (#8882)
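
A quick, hedged way to confirm the update landed in an installed build (not part of this commit; `SanaPipeline` and `SanaPAGPipeline` are the class names cited in the #9982 entry above):

    # Minimal post-update smoke test: check the installed version and that a
    # couple of classes new in the 0.32.0 cycle import cleanly. The import
    # list is illustrative, not exhaustive.
    import diffusers
    from diffusers import SanaPipeline, SanaPAGPipeline  # noqa: F401

    assert diffusers.__version__ == "0.32.0", diffusers.__version__
    print("diffusers", diffusers.__version__, "imports OK")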
1 parent: 67ab204

File tree: 5 files changed (+15, -14 lines)


Makefile

Lines changed: 1 addition & 1 deletion

@@ -1,5 +1,5 @@
 PKG_NAME := pypi-diffusers
-URL = https://files.pythonhosted.org/packages/78/5d/156acb741303abbee214926804c5f0d09eacd35d05ad942577e996acdac3/diffusers-0.31.0.tar.gz
+URL = https://files.pythonhosted.org/packages/12/fa/48b5be99873a1e5916663c0baab408cb5b74b0a060854e5ff06b54b7630c/diffusers-0.32.0.tar.gz
 ARCHIVES =
 
 include ../common/Makefile.common

options.conf

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [package]
 name = pypi-diffusers
-url = https://files.pythonhosted.org/packages/78/5d/156acb741303abbee214926804c5f0d09eacd35d05ad942577e996acdac3/diffusers-0.31.0.tar.gz
+url = https://files.pythonhosted.org/packages/12/fa/48b5be99873a1e5916663c0baab408cb5b74b0a060854e5ff06b54b7630c/diffusers-0.32.0.tar.gz
 archives =
 giturl = https://github.com/huggingface/diffusers/
 domain =
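
Since options.conf is plain INI, the fields autospec reads can be inspected with the standard library; a minimal sketch, assuming it is run from the package checkout:

    # Read the updated options.conf with Python's stdlib INI parser.
    import configparser

    cfg = configparser.ConfigParser()
    cfg.read("options.conf")

    pkg = cfg["package"]
    print(pkg["name"])    # pypi-diffusers
    print(pkg["url"])     # ...diffusers-0.32.0.tar.gz
    print(pkg["giturl"])  # https://github.com/huggingface/diffusers/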

pypi-diffusers.spec

Lines changed: 11 additions & 10 deletions

@@ -2,14 +2,14 @@
 # This file is auto-generated. DO NOT EDIT
 # Generated by: autospec.py
 # Using build pattern: pyproject
-# autospec version: v20
-# autospec commit: f35655a
+# autospec version: v21
+# autospec commit: 5424026
 #
 Name     : pypi-diffusers
-Version  : 0.31.0
-Release  : 60
-URL      : https://files.pythonhosted.org/packages/78/5d/156acb741303abbee214926804c5f0d09eacd35d05ad942577e996acdac3/diffusers-0.31.0.tar.gz
-Source0  : https://files.pythonhosted.org/packages/78/5d/156acb741303abbee214926804c5f0d09eacd35d05ad942577e996acdac3/diffusers-0.31.0.tar.gz
+Version  : 0.32.0
+Release  : 61
+URL      : https://files.pythonhosted.org/packages/12/fa/48b5be99873a1e5916663c0baab408cb5b74b0a060854e5ff06b54b7630c/diffusers-0.32.0.tar.gz
+Source0  : https://files.pythonhosted.org/packages/12/fa/48b5be99873a1e5916663c0baab408cb5b74b0a060854e5ff06b54b7630c/diffusers-0.32.0.tar.gz
 Summary  : State-of-the-art diffusion in PyTorch and JAX.
 Group    : Development/Tools
 License  : Apache-2.0
@@ -73,18 +73,18 @@ python3 components for the pypi-diffusers package.
 
 
 %prep
-%setup -q -n diffusers-0.31.0
-cd %{_builddir}/diffusers-0.31.0
+%setup -q -n diffusers-0.32.0
+cd %{_builddir}/diffusers-0.32.0
 pushd ..
-cp -a diffusers-0.31.0 buildavx2
+cp -a diffusers-0.32.0 buildavx2
 popd
 
 %build
 export http_proxy=http://127.0.0.1:9/
 export https_proxy=http://127.0.0.1:9/
 export no_proxy=localhost,127.0.0.1,0.0.0.0
 export LANG=C.UTF-8
-export SOURCE_DATE_EPOCH=1729626155
+export SOURCE_DATE_EPOCH=1735095270
 export GCC_IGNORE_WERROR=1
 export AR=gcc-ar
 export RANLIB=gcc-ranlib
@@ -101,6 +101,7 @@ ASFLAGS="$CLEAR_INTERMEDIATE_ASFLAGS"
 LDFLAGS="$CLEAR_INTERMEDIATE_LDFLAGS"
 export MAKEFLAGS=%{?_smp_mflags}
 python3 -m build --wheel --skip-dependency-check --no-isolation
+
 pushd ../buildavx2/
 CFLAGS="$CLEAR_INTERMEDIATE_CFLAGS -march=x86-64-v3 -Wl,-z,x86-64-v3 "
 CXXFLAGS="$CLEAR_INTERMEDIATE_CXXFLAGS -march=x86-64-v3 -Wl,-z,x86-64-v3 "

release

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-60
+61

upstream

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-7c719172b1b72c1ff079900e6f4f403da8e1bd90/diffusers-0.31.0.tar.gz
+219e0350c36f358fb851a23b1d3e0beda843ce72/diffusers-0.32.0.tar.gz
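
The upstream file pairs a 40-hex-character digest with the tarball name, which reads as a SHA-1 of the source archive; assuming that convention, a downloaded tarball can be checked like this:

    # Sketch: verify diffusers-0.32.0.tar.gz against the digest recorded in
    # `upstream` (assumed to be SHA-1, judging by its 40-hex-digit length).
    import hashlib

    EXPECTED = "219e0350c36f358fb851a23b1d3e0beda843ce72"

    h = hashlib.sha1()
    with open("diffusers-0.32.0.tar.gz", "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    assert h.hexdigest() == EXPECTED, h.hexdigest()
    print("tarball digest OK")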
