This repository was archived by the owner on Aug 11, 2025. It is now read-only.

Commit 4c59cf5

pypi-diffusers: Autospec creation for update from version 0.30.3 to version 0.31.0
0x名無し (1):
      Dreambooth lora flux bug 3dtensor to 2dtensor (#9653)
Ahnjj_DEV (1):
      Fix some documentation in ./src/diffusers/models/adapter.py (#9591)
Anand Kumar (2):
      [train_custom_diffusion.py] Fix the LR schedulers when `num_train_epochs` is passed in a distributed training env (#9308)
      [train_instruct_pix2pix.py]Fix the LR schedulers when `num_train_epochs` is passed in a distributed training env (#9316)
Anatoly Belikov (1):
      adapt masked im2im pipeline for SDXL (#7790)
Aryan (24):
      [refactor] CogVideoX followups + tiled decoding support (#9150)
      [tests] fix broken xformers tests (#9206)
      AnimateDiff prompt travel (#9231)
      [docs] Add a note on torchao/quanto benchmarks for CogVideoX and memory-efficient inference (#9296)
      [core] CogVideoX memory optimizations in VAE encode (#9340)
      [core] Support VideoToVideo with CogVideoX (#9333)
      [tests] remove/speedup some low signal tests (#9285)
      [refactor] move positional embeddings to patch embed layer for CogVideoX (#9263)
      [core] Freenoise memory improvements (#9262)
      [docs] AnimateDiff FreeNoise (#9414)
      Allow max shard size to be specified when saving pipeline (#9440)
      Remove CogVideoX mentions from single file docs; Test updates (#9444)
      set max_shard_size to None for pipeline save_pretrained (#9447)
      [training] CogVideoX Lora (#9302)
      [refactor] LoRA tests (#9481)
      [bug] Precedence of operations in VAE should be slicing -> tiling (#9342)
      [refactor] remove conv_cache from CogVideoX VAE (#9524)
      [training] CogVideoX-I2V LoRA (#9482)
      [pipeline] CogVideoX-Fun Control (#9671)
      [core] improve VAE encode/decode framewise batching (#9684)
      [tests] fix name and unskip CogI2V integration test (#9683)
      [refactor] DiffusionPipeline.download (#9557)
      [CI] pin max torch version to fix CI errors (#9709)
      `make deps_table_update` to fix CI tests (#9720)
Beinsezii (1):
      Add Lumina T2I Auto Pipe Mapping (#8962)
Benjamin Bossan (1):
      MAINT Permission for GH token in stale.yml (#9427)
C (1):
      [Flux] Optimize guidance creation in flux pipeline by moving it outside the loop (#9153)
Charchit Sharma (2):
      refactor image_processor.py file (#9608)
      Resolves [BUG] 'GatheredParameters' object is not callable (#9614)
Chenyu Li (1):
      Fix typo in cogvideo pipeline (#9722)
Clem (1):
      fix xlabs FLUX lora conversion typo (#9581)
Daniel Socek (1):
      Fix textual inversion SDXL and add support for 2nd text encoder (#9010)
Darren Hsu (1):
      Support bfloat16 for Upsample2D (#9480)
David Steinberg (1):
      Fix a dead link (#9116)
Dhruv Nair (22):
      Update Video Loading/Export to use `imageio` (#9094)
      Small improvements for video loading (#9183)
      Update `is_safetensors_compatible` check (#8991)
      [CI] Multiple Slow Test fixes. (#9198)
      [CI] Add `fail-fast=False` to CUDA nightly and slow tests (#9214)
      Remove M1 runner from Nightly Test (#9193)
      [Single File] Fix configuring scheduler via legacy kwargs (#9229)
      [Single File] Support loading Comfy UI Flux checkpoints (#9243)
      [Single File] Add Flux Pipeline Support (#9244)
      [CI] Run Fast + Fast GPU Tests on release branches. (#9255)
      Fix Freenoise for AnimateDiff V3 checkpoint. (#9288)
      [CI] Update Release Tests (#9274)
      Fix Flux CLIP prompt embeds repeat for num_images_per_prompt > 1 (#9280)
      [CI] Update Hub Token on nightly tests (#9318)
      [CI] More fixes for Fast GPU Tests on main (#9300)
      [CI] More Fast GPU Test Fixes (#9346)
      [CI] Add option to dispatch Fast GPU tests on main (#9355)
      [CI] Update Single file Nightly Tests (#9357)
      [CI] Quick fix for Cog Video Test (#9373)
      [CI] Nightly Test Updates (#9380)
      Fix typos (#9739)
      is_safetensors_compatible fix (#9741)
Dibbla! (1):
      Errata - fix typo (#9100)
Disty0 (1):
      Custom sampler support for Stable Cascade Decoder (#9132)
Eduardo Escobar (1):
      Enable `load_lora_weights` for `StableDiffusion3InpaintPipeline` (#9330)
Elias Rad (1):
      Docs fix spelling issues (#9219)
Eliseu Silva (1):
      Fix for use_safetensors parameters, allow use of parameter on loading submodels (#9576) (#9587)
Fanli Lin (1):
      [tests] make 2 tests device-agnostic (#9347)
Frank (Haofan) Wang (1):
      Update __init__.py (#9286)
G.O.D (1):
      [bugfix] reduce float value error when adding noise (#9004)
GSSun (1):
      fix IsADirectoryError when running the training code for sd3_dreambooth_lora_16gb.ipynb (#9634)
Haruya Ishikawa (1):
      fix one uncaught deprecation warning for accessing vae_latent_channels in VaeImagePreprocessor (#9372)
Igor Filippov (1):
      [Pipeline] animatediff + vid2vid + controlnet (#9337)
Jianqi Pan (1):
      fix(pipeline): k sampler sigmas device (#9189)
Jinzhe Pan (2):
      [docs] Add xDiT in section optimization (#9365)
      [docs] Fix xDiT doc image damage (#9655)
Jiwook Han (2):
      Reflect few contributions on `contribution.md` that were not reflected on #8294 (#8938)
      [doc] Fix some docstrings in `src/diffusers/training_utils.py` (#9606)
Jongho Choi (1):
      [peft] simple update when unscale (#9689)
Juan Acevedo (1):
      Ptxla sd training (#9381)
JuanCarlosPi (1):
      Add PAG support to StableDiffusionControlNetPAGInpaintPipeline (#8875)
Lee Penkman (1):
      Update community_projects.md (#9266)
Leo Jiang (3):
      Fix dtype error for StableDiffusionXL (#9217)
      Fix the issue on sd3 dreambooth w./w.t. lora training (#9419)
      Improve the performance and suitable for NPU computing (#9642)
Linoy Tsaban (9):
      [Flux] Dreambooth LoRA training scripts (#9086)
      [Flux Dreambooth LoRA] - te bug fixes & updates (#9139)
      [Dreambooth flux] bug fix for dreambooth script (align with dreambooth lora) (#9257)
      improve README for flux dreambooth lora (#9290)
      [Flux Dreambooth lora] add latent caching (#9160)
      [Flux with CFG] add flux pipeline with cfg support (#9445)
      [SD3 dreambooth-lora training] small updates + bug fixes (#9682)
      [Flux] Add advanced training script + support textual inversion inference (#9434)
      [advanced flux lora script] minor updates to readme (#9705)
LukeLin (1):
      [Doc] Fix path and and also import imageio (#9506)
M Saqlain (3):
      [Tests] Improve transformers model test suite coverage - Lumina (#8987)
      [Tests] Reduce the model size in the lumina test (#8985)
      Add Differential Diffusion to Kolors (#9423)
Marçal Comajoan Cara (1):
      Replace transformers.deepspeed with transformers.integrations.deepspeed (#9281)
Monjoy Narayan Choudhury (1):
      Add Differential Diffusion to HunyuanDiT. (#9040)
Pakkapon Phongthawee (1):
      make controlnet support interrupt (#9620)
PromeAI (1):
      [examples] add train flux-controlnet scripts in example. (#9324)
Robin (1):
      [Fix] when run load pretain with local_files_only, local variable 'cached_folder' referenced before assignment (#9376)
Ryan Lin (1):
      Flux - soft inpainting via differential diffusion (#9268)
SahilCarterr (2):
      add PAG support for SD Img2Img (#9463)
      Added Lora Support to SD3 Img2Img Pipeline (#9659)
Sangwon Lee (1):
      Fix StableDiffusionXLPAGInpaintPipeline (#9128)
Sayak Paul (39):
      Update README.md to include InstantID (#8770)
      Update distributed_inference.md to include a fuller example on distributed inference (#9152)
      feat: allow flux transformer to be sharded during inference (#9159)
      [Chore] add set_default_attn_processor to pixart. (#9196)
      feat: allow sharding for auraflow. (#8853)
      [Core] Tear apart `from_pretrained()` of `DiffusionPipeline` (#8967)
      [Flux LoRA] support parsing alpha from a flux lora state dict. (#9236)
      [Core] fuse_qkv_projection() to Flux (#9185)
      [LoRA] support kohya and xlabs loras for flux. (#9295)
      chore: add a cleaning utility to be useful during training. (#9240)
      modify benchmarks to replace sdv1.5 with dreamshaper. (#9334)
      [Tests] fix some fast gpu tests. (#9379)
      [CI] update artifact uploader version (#9426)
      [LoRA] fix adapter movement when using DoRA. (#9411)
      [CI] make runner_type restricted. (#9441)
      [CI] updates to the CI report naming, and `accelerate` installation (#9429)
      [Flux] add lora integration tests. (#9353)
      [CI] fix nightly model tests (#9483)
      [Cog] some minor fixes and nits (#9466)
      [CI] allow faster downloads from the Hub in CI. (#9478)
      [Community Pipeline] Batched implementation of Flux with CFG (#9513)
      [LoRA] make set_adapters() method more robust. (#9535)
      [Tests] [LoRA] clean up the serialization stuff. (#9512)
      [Core] fix variant-identification. (#9253)
      [chore] fix: retain memory utility. (#9543)
      [LoRA] support Kohya Flux LoRAs that have text encoders as well (#9542)
      [Chore] add a note on the versions in Flux LoRA integration tests (#9598)
      Update distributed_inference.md to include `transformer.device_map` (#9553)
      [LoRA] Handle DoRA better (#9547)
      [LoRA] allow loras to be loaded with low_cpu_mem_usage. (#9510)
      [LoRA] fix dora test to catch the warning properly. (#9627)
      [CI] replace ubuntu version to 22.04. (#9656)
      [Tests] increase transformers version in `test_low_cpu_mem_usage_with_loading` (#9662)
      [Chore] fix import of EntryNotFoundError. (#9676)
      [LoRA] log a warning when there are missing keys in the LoRA loading. (#9622)
      [Docker] pin torch versions in the dockerfiles. (#9721)
      [Quantization] Add quantization support for `bitsandbytes` (#9213)
      [Docs] docs to xlabs controlnets. (#9688)
      [bitsandbbytes] follow-ups (#9730)
Seongbin Lim (1):
      Allow DDPMPipeline half precision (#9222)
Simo Ryu (1):
      Add Learned PE selection for Auraflow (#9182)
Steven Liu (5):
      [docs] Organize model toctree (#9118)
      [docs] Resolve internal links to PEFT (#9144)
      [docs] Network alpha docstring (#9238)
      [docs] Add pipelines to table (#9282)
      [docs] Model sharding (#9521)
Subho Ghosh (2):
      Feature flux controlnet img2img and inpaint pipeline (#9408)
      flux controlnet control_guidance_start and control_guidance_end implement (#9571)
Tolga Cangöz (4):
      [`Docs`] Fix CPU offloading usage (#9207)
      Update `UNet2DConditionModel`'s error messages (#9230)
      [`Community Pipeline`] Add 🪆Matryoshka Diffusion Models (#9157)
      Fix `schedule_shifted_power` usage in 🪆Matryoshka Diffusion Models (#9723)
Vinh H. Pham (1):
      StableDiffusionLatentUpscalePipeline - positive/negative prompt embeds support (#8947)
Vishnu V Jaddipal (3):
      Fix ```from_single_file``` for xl_inpaint (#9054)
      Xlabs lora fix (#9348)
      Add Flux inpainting and Flux Img2Img (#9135)
Vladimir Mandic (1):
      Several fixes to Flux ControlNet pipelines (#9472)
Wenlong Wu (1):
      Add loading text inversion (#9130)
Xiangchendong (1):
      fix cogvideox autoencoder decode (#9569)
YiYi Xu (18):
      fix autopipeline for kolors img2img (#9212)
      fix a regression in `is_safetensors_compatible` (#9234)
      Flux followup (#9074)
      fix _identify_model_variants (#9247)
      refactor 3d rope for cogvideox (#9269)
      rotary embedding refactor 2: update comments, fix dtype for use_real=False (#9312)
      refactor rotary embedding 3: so it is not on cpu (#9307)
      update runway repo for single_file (#9323)
      small update on rotary embedding (#9354)
      add flux inpaint + img2img + controlnet to auto pipeline (#9367)
      refactor `get_timesteps` for SDXL img2img + add set_begin_index (#9375)
      a few fix for SingleFile tests (#9522)
      update get_parameter_dtype (#9526)
      flux controlnet fix (control_modes batch & others) (#9507)
      [sd3] make sure height and size are divisible by `16` (#9573)
      [authored by @Anghellia) Add support of Xlabs Controlnets #9638 (#9687)
      minor doc/test update (#9734)
      fix singlestep dpm tests (#9716)
Yijun Lee (2):
      refac: docstrings in import_utils.py (#9583)
      refac/pipeline_output (#9582)
Yu Zheng (2):
      [examples] add controlnet sd3 example (#9249)
      Update sd3 controlnet example (#9735)
Yuxuan.Zhang (2):
      CogVideoX-5b-I2V support (#9418)
      CogView3Plus DiT (#9570)
Zoltan (1):
      Add vae slicing and tiling to flux pipeline (#9122)
apolinário (1):
      Change default for `guidance_scale`in FLUX (#9305)
asfiyab-nvidia (1):
      FluxPosEmbed: Remove Squeeze No-op (#9409)
bonlime (1):
      Fix bug in Textual Inversion Unloading (#9304)
captainzz (3):
      fix from_transformer() with extra conditioning channels (#9364)
      fix bugs for sd3 controlnet training (#9489)
      fix vae dtype when accelerate config using --mixed_precision="fp16" (#9601)
dependabot[bot] (2):
      Bump torch from 2.0.1 to 2.2.0 in /examples/research_projects/realfill (#8971)
      Bump jinja2 from 3.1.3 to 3.1.4 in /examples/research_projects/realfill (#7873)
dianyo (1):
      Migrate the BrownianTree to BrownianInterval in DPM solver (#9335)
glide-the (2):
      fix: CogVideox train dataset _preprocess_data crop video (#9574)
      Docs: CogVideoX (#9578)
hlky (12):
      [Schedulers] Add exponential sigmas / exponential noise schedule (#9499)
      Add Noise Schedule/Schedule Type to Schedulers Overview documentation (#9504)
      Add exponential sigmas to other schedulers and update docs (#9518)
      [Schedulers] Add beta sigmas / beta noise schedule (#9509)
      Add beta sigmas to other schedulers and update docs (#9538)
      FluxMultiControlNetModel (#9647)
      Add pred_original_sample to `if not return_dict` path (#9649)
      Convert list/tuple of `SD3ControlNetModel` to `SD3MultiControlNetModel` (#9652)
      Convert list/tuple of `HunyuanDiT2DControlNetModel` to `HunyuanDiT2DMultiControlNetModel` (#9651)
      Refactor SchedulerOutput and add pred_original_sample in `DPMSolverSDE`, `Heun`, `KDPM2Ancestral` and `KDPM2` (#9650)
      Slight performance improvement to `Euler`, `EDMEuler`, `FlowMatchHeun`, `KDPM2Ancestral` (#9616)
      Add prompt scheduling callback to community scripts (#9718)
pibbo88 (1):
      Fix the bug of sd3 controlnet training when using gradient checkpointing. (#9498)
sanaka (1):
      Fix the bug that `joint_attention_kwargs` is not passed to the FLUX's transformer attention processors (#9517)
satani99 (1):
      Add StableDiffusionXLControlNetPAGImg2ImgPipeline (#8990)
sayakpaul (1):
      Release: v0.31.0
sayantan sadhu (1):
      fix for lr scheduler in distributed training (#9103)
suzukimain (1):
      [docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8 (#9428)
timdalxx (1):
      [docs] add docstrings in `pipline_stable_diffusion.py` (#9590)
townwish4git (1):
      fix(sd3): fix deletion of text_encoders etc (#8951)
v2ray (2):
      [Doc] Improved level of clarity for latents_to_rgb. (#9529)
      Fixed noise_pred_text referenced before assignment. (#9537)
wony617 (2):
      [docs] refactoring docstrings in `community/hd_painter.py` (#9593)
      [docs] refactoring docstrings in `models/embeddings_flax.py` (#9592)
yangpei-comp (1):
      Bugfix in `pipeline_kandinsky2_2_combined.py`: Image type check mismatch (#9256)
zR (1):
      Cogvideox-5B Model adapter change (#9203)
Álvaro Somoza (5):
      post release 0.30.0 (#9173)
      [IP Adapter] Fix object has no attribute with image encoder (#9194)
      [IP Adapter] Fix `cache_dir` and `local_files_only` for image encoder (#9272)
      [Tests] Fix ChatGLMTokenizer (#9536)
      [Fix] Using sharded checkpoints with gated repositories (#9737)
林金鹏 (1):
      Support SD3 controlnet inpainting (#9099)
王奇勋 (2):
      [FLUX] Support ControlNet (#9126)
      [Flux] Support Union ControlNet (#9175)
1 parent 24b0621 commit 4c59cf5

File tree

5 files changed: +13 -13 lines changed


Makefile

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 PKG_NAME := pypi-diffusers
-URL = https://files.pythonhosted.org/packages/de/a9/a53a3d0c0a277a5002aa1e625d0e651b2957f901438052d8d47a97703883/diffusers-0.30.3.tar.gz
+URL = https://files.pythonhosted.org/packages/78/5d/156acb741303abbee214926804c5f0d09eacd35d05ad942577e996acdac3/diffusers-0.31.0.tar.gz
 ARCHIVES =

 include ../common/Makefile.common
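A side note on why the whole URL path changes between releases: files.pythonhosted.org paths appear to be content-addressed, with a 64-hex-digit file digest split 2/2/60 between `/packages/` and the filename. The digest scheme (BLAKE2b-256) is an assumption about the hosting layout, not something stated in this commit; the sketch below just illustrates the path shape on a throwaway file, and `demo.tar.gz` is purely illustrative.

```shell
# Build a warehouse-style content-addressed path for a throwaway file.
# Assumes GNU coreutils b2sum; the digest scheme is an assumption.
printf 'demo' > demo.tar.gz
digest=$(b2sum -l 256 demo.tar.gz | awk '{print $1}')
p1=$(printf '%s' "$digest" | cut -c1-2)
p2=$(printf '%s' "$digest" | cut -c3-4)
rest=$(printf '%s' "$digest" | cut -c5-64)
echo "/packages/${p1}/${p2}/${rest}/demo.tar.gz"
```

Because the path is derived from the archive's contents, a new tarball always lands at a new path, which is why both the Makefile and options.conf diffs rewrite the full URL rather than just the version.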

options.conf

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 [package]
 name = pypi-diffusers
-url = https://files.pythonhosted.org/packages/de/a9/a53a3d0c0a277a5002aa1e625d0e651b2957f901438052d8d47a97703883/diffusers-0.30.3.tar.gz
+url = https://files.pythonhosted.org/packages/78/5d/156acb741303abbee214926804c5f0d09eacd35d05ad942577e996acdac3/diffusers-0.31.0.tar.gz
 archives =
 giturl = https://github.com/huggingface/diffusers/
 domain =

pypi-diffusers.spec

Lines changed: 9 additions & 9 deletions
@@ -2,14 +2,14 @@
 # This file is auto-generated. DO NOT EDIT
 # Generated by: autospec.py
 # Using build pattern: pyproject
-# autospec version: v19
+# autospec version: v20
 # autospec commit: f35655a
 #
 Name     : pypi-diffusers
-Version  : 0.30.3
-Release  : 58
-URL      : https://files.pythonhosted.org/packages/de/a9/a53a3d0c0a277a5002aa1e625d0e651b2957f901438052d8d47a97703883/diffusers-0.30.3.tar.gz
-Source0  : https://files.pythonhosted.org/packages/de/a9/a53a3d0c0a277a5002aa1e625d0e651b2957f901438052d8d47a97703883/diffusers-0.30.3.tar.gz
+Version  : 0.31.0
+Release  : 59
+URL      : https://files.pythonhosted.org/packages/78/5d/156acb741303abbee214926804c5f0d09eacd35d05ad942577e996acdac3/diffusers-0.31.0.tar.gz
+Source0  : https://files.pythonhosted.org/packages/78/5d/156acb741303abbee214926804c5f0d09eacd35d05ad942577e996acdac3/diffusers-0.31.0.tar.gz
 Summary  : State-of-the-art diffusion in PyTorch and JAX.
 Group    : Development/Tools
 License  : Apache-2.0
@@ -73,18 +73,18 @@ python3 components for the pypi-diffusers package.


 %prep
-%setup -q -n diffusers-0.30.3
-cd %{_builddir}/diffusers-0.30.3
+%setup -q -n diffusers-0.31.0
+cd %{_builddir}/diffusers-0.31.0
 pushd ..
-cp -a diffusers-0.30.3 buildavx2
+cp -a diffusers-0.31.0 buildavx2
 popd

 %build
 export http_proxy=http://127.0.0.1:9/
 export https_proxy=http://127.0.0.1:9/
 export no_proxy=localhost,127.0.0.1,0.0.0.0
 export LANG=C.UTF-8
-export SOURCE_DATE_EPOCH=1726581482
+export SOURCE_DATE_EPOCH=1729626155
 export GCC_IGNORE_WERROR=1
 export AR=gcc-ar
 export RANLIB=gcc-ranlib
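Two details of the %build environment in the spec are worth noting: the proxies point at 127.0.0.1 port 9 (the discard service port, which is almost never listening), so any accidental network access during the build is refused immediately, and SOURCE_DATE_EPOCH pins file timestamps for reproducible builds. A minimal sketch showing the same pattern and what the updated epoch decodes to (GNU date assumed):

```shell
# Aim proxies at the discard port so network access fails fast
# (values copied from the spec above).
export http_proxy=http://127.0.0.1:9/
export https_proxy=http://127.0.0.1:9/
# Pin build timestamps for reproducibility, then show the pinned
# instant in UTC (GNU date syntax).
export SOURCE_DATE_EPOCH=1729626155
date -u -d "@${SOURCE_DATE_EPOCH}" +%Y-%m-%dT%H:%M:%SZ
# -> 2024-10-22T19:42:35Z
```

The bumped epoch (1726581482 to 1729626155) thus simply moves the pinned build timestamp forward to the 0.31.0 release date.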

release

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-58
+59

upstream

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-4eb4eabc73b263ba9be844cf9f3045cfcdbbad56/diffusers-0.30.3.tar.gz
+7c719172b1b72c1ff079900e6f4f403da8e1bd90/diffusers-0.31.0.tar.gz
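The `upstream` file records `<40-hex digest>/<tarball name>`. A small sketch of how that record can be parsed and checked against a downloaded tarball; the 40-hex length suggests SHA-1, which is an assumption here, and the download step is left as a comment so the sketch runs offline.

```shell
# Parse the upstream record into its digest and filename parts
# using POSIX parameter expansion.
record="7c719172b1b72c1ff079900e6f4f403da8e1bd90/diffusers-0.31.0.tar.gz"
expected="${record%%/*}"   # digest portion (before the first slash)
tarball="${record#*/}"     # tarball filename (after the first slash)
echo "verify ${tarball} against ${expected}"
# With the tarball downloaded next to this script, the check would be:
#   actual=$(sha1sum "${tarball}" | awk '{print $1}')
#   [ "${actual}" = "${expected}" ] || echo "digest mismatch" >&2
```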
