Update dependency timm to v1.0.24 #331
konflux-internal-p02[bot] wants to merge 1 commit into rhoai-3.4 from
Conversation
Signed-off-by: konflux-internal-p02 <170854209+konflux-internal-p02[bot]@users.noreply.github.com>
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: konflux-internal-p02[bot]. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Hi @konflux-internal-p02[bot]. Thanks for your PR. I'm waiting for a red-hat-data-services member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This PR contains the following updates:
| Package | Change |
| --- | --- |
| timm | `==1.0.15` -> `==1.0.24` |
| timm | `==1.0.22` -> `==1.0.24` |

Release Notes
huggingface/pytorch-image-models (timm)
v1.0.24 (Compare Source)
Jan 5 & 6, 2026
Dec 30, 2025
- `dpwee`, `dwee`, `dlittle` (differential) ViTs with a small boost over previous runs
- `timm` variant of the CSATv2 model at 512x512 & 640x640
- Non-persistent buffer init moved out of `__init__` into a common method that can be externally called via `init_non_persistent_buffers()` after meta-device init.

Dec 12, 2025
- `timm` Muon impl. Appears more competitive vs AdamW with familiar hparams for image tasks.
- Differential attention (`DiffAttention`), add corresponding `DiffParallelScalingBlock` (for ViT), train some wee vits
- `LsePlus` and `SimPool`
- `DropBlock2d` (also add support to ByobNet based models)

Dec 1, 2025
What's Changed
New Contributors
Full Changelog: huggingface/pytorch-image-models@v1.0.22...v1.0.24
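For reviewers who want to exercise the `init_non_persistent_buffers()` change above, here is a minimal sketch of the meta-device flow it enables. The call sequence and model name are assumptions for illustration, not timm documentation:

```python
# Sketch of meta-device init followed by init_non_persistent_buffers().
# Assumes PyTorch >= 2.1 (for load_state_dict(..., assign=True)).
import torch
import timm

with torch.device("meta"):
    # Build the module structure without allocating real parameter memory.
    model = timm.create_model("vit_base_patch16_224")

# In real use the state dict would come from a checkpoint file; building a
# second pretrained model here just keeps the sketch self-contained.
state_dict = timm.create_model("vit_base_patch16_224", pretrained=True).state_dict()
model.load_state_dict(state_dict, assign=True)  # materialize params from real tensors

# Non-persistent buffers are not in the state dict, so they are still on the
# meta device; per the notes, this re-initializes them after meta-device init.
model.init_non_persistent_buffers()
```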
v1.0.23 (Compare Source)
Dec 30, 2025
- `dpwee`, `dwee`, `dlittle` (differential) ViTs with a small boost over previous runs
- `timm` variant of the CSATv2 model at 512x512 & 640x640
- Non-persistent buffer init moved out of `__init__` into a common method that can be externally called via `init_non_persistent_buffers()` after meta-device init.

Dec 12, 2025
- `timm` Muon impl. Appears more competitive vs AdamW with familiar hparams for image tasks.
- Differential attention (`DiffAttention`), add corresponding `DiffParallelScalingBlock` (for ViT), train some wee vits
- `LsePlus` and `SimPool`
- `DropBlock2d` (also add support to ByobNet based models)

Dec 1, 2025
What's Changed
New Contributors
Full Changelog: huggingface/pytorch-image-models@v1.0.22...v1.0.23
v1.0.22 (Compare Source)
Patch release for a priority fix to a LayerScale initialization regression in 1.0.21
What's Changed
New Contributors
Full Changelog: huggingface/pytorch-image-models@v1.0.21...v1.0.22
v1.0.21 (Compare Source)
Oct 16-20, 2025
- Muon optimizer falls back to SGD (`nesterov=True`) updates if Muon is not suitable for the parameter shape (or excluded via param group flag)
- `adjust_lr_fn` and `ns_coefficients` options for the `timm` Muon impl

What's Changed
- `huggingface_hub` integration by @Wauplin in #2592

New Contributors
Full Changelog: huggingface/pytorch-image-models@v1.0.20...v1.0.21
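A hedged sketch of trying the Muon impl via timm's optimizer factory; the `"muon"` opt name and the hyperparameter values below are assumptions based on the notes, not verified against the release:

```python
import timm
from timm.optim import create_optimizer_v2

model = timm.create_model("vit_base_patch16_224")
# opt="muon" is assumed to select the timm Muon impl; unsupported parameter
# shapes should fall back to SGD-style nesterov updates per the notes above.
optimizer = create_optimizer_v2(model, opt="muon", lr=2e-2, weight_decay=0.05)
```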
v1.0.20 (Compare Source)
Sept 21, 2025
- Rename `lvd_1689m` -> `lvd1689m` to match (same for `sat_493m` -> `sat493m`)

Sept 17, 2025
- DINOv3 support as a `timm` model. ViT support done via the EVA base model w/ a new `RotaryEmbeddingDinoV3` to match the DINOv3 specific RoPE impl

What's Changed
New Contributors
Full Changelog: huggingface/pytorch-image-models@v1.0.19...v1.0.20
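For reference, loading one of the DINOv3 weights might look like the sketch below. The exact model/tag name is an assumption (note the `lvd1689m` tag rename above), so check `timm.list_models('*dinov3*')` first:

```python
import torch
import timm

print(timm.list_models("*dinov3*"))  # discover the actual DINOv3 model names

# Hypothetical name following timm's usual <arch>.<tag> convention.
model = timm.create_model("vit_base_patch16_dinov3.lvd1689m", pretrained=True)
model.eval()

# Use the pretrained config so the dummy input matches the expected size.
c, h, w = model.pretrained_cfg["input_size"]
features = model.forward_features(torch.randn(1, c, h, w))
```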
v1.0.19 (Compare Source)
Patch release for Python 3.9 compat break in 1.0.18
July 23, 2025
- Add `set_input_size()` method to EVA models, used by OpenCLIP 3.0.0 to allow resizing for timm based encoder models.

July 21, 2025
- Models (in `eva.py`) including EVA, EVA02, Meta PE ViT, `timm` SBB ViT w/ ROPE, and Naver ROPE-ViT can now be loaded in NaFlexViT when `use_naflex=True` is passed at model creation time

What's Changed
Full Changelog: huggingface/pytorch-image-models@v1.0.17...v1.0.18
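A minimal sketch of the `use_naflex=True` path described above, using an EVA02 model name that exists in current timm; treat the exact runtime behavior as an assumption:

```python
import torch
import timm

# Route an eva.py family model through the NaFlexViT implementation.
model = timm.create_model(
    "eva02_tiny_patch14_224",
    pretrained=False,   # set True to pull the published weights
    use_naflex=True,    # per the notes: load the eva.py model as NaFlexViT
)
# Assumed to still accept a standard NCHW tensor at the native input size.
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)
```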
v1.0.18 (Compare Source)
July 23, 2025
- Add `set_input_size()` method to EVA models, used by OpenCLIP 3.0.0 to allow resizing for timm based encoder models.

July 21, 2025
- Models (in `eva.py`) including EVA, EVA02, Meta PE ViT, `timm` SBB ViT w/ ROPE, and Naver ROPE-ViT can now be loaded in NaFlexViT when `use_naflex=True` is passed at model creation time

What's Changed
Full Changelog: huggingface/pytorch-image-models@v1.0.17...v1.0.18
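And a sketch of the `set_input_size()` addition from this release, with keyword names assumed to match the existing `vision_transformer.py` method of the same name:

```python
import torch
import timm

model = timm.create_model("eva02_tiny_patch14_224", pretrained=False)

# Resize the model after construction, as OpenCLIP 3.0.0 does for timm
# encoders; 336 keeps the input divisible by the 14px patch size.
model.set_input_size(img_size=(336, 336))
out = model(torch.randn(1, 3, 336, 336))
```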
v1.0.17 (Compare Source)
July 7, 2025
- … `eva.py`, add `RotaryEmbeddingMixed` module for mixed mode, weights on HuggingFace Hub

What's Changed
New Contributors
Full Changelog: huggingface/pytorch-image-models@v1.0.16...v1.0.17
v1.0.16 (Compare Source)
June 26, 2025
June 23, 2025
- … `forward_intermediates` and fix some checkpointing bugs. Thanks https://github.com/brianhou0208

June 5, 2025
- Models in `vision_transformer.py` can be loaded into the NaFlexVit model by adding the `use_naflex=True` flag to `create_model`
- In `train.py` and `validate.py`, add the `--naflex-loader` arg; must be used with a NaFlexVit, e.g. `python validate.py /imagenet --amp -j 8 --model vit_base_patch16_224 --model-kwargs use_naflex=True --naflex-loader --naflex-max-seq-len 256`
- The `--naflex-train-seq-lens` argument specifies which sequence lengths to randomly pick from per batch during training
- The `--naflex-max-seq-len` argument sets the target sequence length for validation
- `--model-kwargs enable_patch_interpolator=True --naflex-patch-sizes 12 16 24` will enable random patch size selection per-batch w/ interpolation
- The `--naflex-loss-scale` arg changes loss scaling mode per batch relative to the batch size; `timm` NaFlex loading changes the batch size for each seq len

May 28, 2025
- … `timm` weights
- … `forward_intermediates()` and some additional fixes thanks to https://github.com/brianhou0208
- … `forward_intermediates()` thanks to https://github.com/brianhou0208
- Add `local-dir:` pretrained schema; can use `local-dir:/path/to/model/folder` as the model name to source model / pretrained cfg & weights for Hugging Face Hub models (config.json + weights file) from a local folder.

What's Changed
- … `download` argument from torch_kwargs for torchvision `ImageNet` class by @ryan-caesar-ramos in #2486
- … `head_dim` reference in `AttentionRope` class of `attention.py` by @amorehead in #2519
- … `forward_intermediates()` by @brianhou0208 in #2501

New Contributors
Full Changelog: huggingface/pytorch-image-models@v1.0.15...v1.0.16
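Two of the items above in one short sketch: `forward_intermediates()` feature extraction (this API exists in current timm) and the new `local-dir:` pretrained schema (the folder path below is a placeholder):

```python
import torch
import timm

model = timm.create_model("vit_small_patch16_224", pretrained=False)

# forward_intermediates() returns the final features plus per-stage
# intermediate feature maps.
final, intermediates = model.forward_intermediates(torch.randn(1, 3, 224, 224))
for t in intermediates:
    print(t.shape)

# The local-dir: schema sources config.json + weights from a local folder:
# model = timm.create_model("local-dir:/path/to/model/folder", pretrained=True)
```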
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about these updates again.
To execute skipped test pipelines, write the comment `/ok-to-test`.

Documentation
Find out how to configure dependency updates in MintMaker documentation or see all available configuration options in Renovate documentation.