
Update dependency timm to v1.0.24 #331

Open
konflux-internal-p02[bot] wants to merge 1 commit into rhoai-3.4 from konflux/mintmaker/rhoai-3.4/timm-1.x

Conversation


@konflux-internal-p02 konflux-internal-p02 bot commented Jan 20, 2026

This PR contains the following updates:

| Package | Change |
| --- | --- |
| timm | `==1.0.15` -> `==1.0.24` |
| timm | `==1.0.22` -> `==1.0.24` |

Release Notes

huggingface/pytorch-image-models (timm)

v1.0.24

Compare Source

Jan 5 & 6, 2026
  • Patch Release 1.0.24 (fix for 1.0.23)
  • Add new benchmark result csv files for inference timing on all models w/ RTX Pro 6000, 5090, and 4090 cards w/ PyTorch 2.9.1
  • Fix moved module error in the deprecated timm.models.layers import path that impacts legacy imports (see the import sketch below)
  • Release 1.0.23
Dec 30, 2025
Dec 12, 2025
Dec 1, 2025
  • Add a lightweight task abstraction; add logits and feature distillation support to the train script via new tasks.
  • Remove old APEX AMP support
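
As a quick illustration of the deprecated-path fix above, the legacy import location keeps working in 1.0.24; a minimal sketch (DropPath is just one example symbol, and new code should prefer timm.layers):

```python
# Legacy (deprecated) import path restored by the 1.0.24 fix; it typically
# emits a deprecation warning. New code should import from timm.layers instead.
from timm.models.layers import DropPath   # legacy path, kept working
from timm.layers import DropPath as _DP   # preferred path

layer = DropPath(drop_prob=0.1)  # stochastic depth layer, as an example
```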

Full Changelog: huggingface/pytorch-image-models@v1.0.22...v1.0.24

v1.0.23

Compare Source

Dec 30, 2025
Dec 12, 2025
Dec 1, 2025
  • Add a lightweight task abstraction; add logits and feature distillation support to the train script via new tasks.
  • Remove old APEX AMP support

Full Changelog: huggingface/pytorch-image-models@v1.0.22...v1.0.23

v1.0.22

Compare Source

Priority patch release for a LayerScale initialization regression in 1.0.21


Full Changelog: huggingface/pytorch-image-models@v1.0.21...v1.0.22

v1.0.21

Compare Source

Oct 16-20, 2025

  • Add an impl of the Muon optimizer (based on https://github.com/KellerJordan/Muon) with customizations (see the usage sketch after this list)
    • extra flexibility and improved handling for conv weights and fallbacks for weight shapes not suited for orthogonalization
    • small speedup for NS iterations by reducing allocs and using fused (b)add(b)mm ops
    • by default uses AdamW (or NAdamW if nesterov=True) updates if Muon is not suitable for the parameter shape (or it is excluded via a param group flag)
    • like the torch impl, select from several LR scale adjustment fns via adjust_lr_fn
    • select from several NS coefficient presets or specify your own via ns_coefficients
  • First 2 steps of 'meta' device model initialization supported
    • Fix several ops that were breaking creation under 'meta' device context
    • Add device & dtype factory kwarg support to all models and modules (anything inheriting from nn.Module) in timm
  • License fields added to pretrained cfgs in code
  • Release 1.0.21
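
A minimal usage sketch for the Muon and meta-device items above. Note that the 'muon' opt name and kwargs passed to create_optimizer_v2 are assumptions based on these notes, not a verified API surface:

```python
import torch
import timm
from timm.optim import create_optimizer_v2

# Meta-device creation: no weight memory is allocated; per the notes, timm
# ops were fixed to work under the 'meta' device context.
with torch.device('meta'):
    meta_model = timm.create_model('vit_base_patch16_224')

# Muon via the optimizer factory. The opt name 'muon' is an assumption from
# the release notes; params unsuited to orthogonalization fall back to
# AdamW (or NAdamW w/ nesterov=True) updates automatically.
model = timm.create_model('resnet18')
optimizer = create_optimizer_v2(model, opt='muon', lr=0.02)
```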


Full Changelog: huggingface/pytorch-image-models@v1.0.20...v1.0.21

v1.0.20

Compare Source

Sept 21, 2025

  • Remap DINOv3 ViT weight tags from lvd_1689m -> lvd1689m to match (same for sat_493m -> sat493m), as shown in the sketch below
  • Release 1.0.20
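
Illustrating the tag rename above; only the lvd_1689m -> lvd1689m change is confirmed by the notes, and the base model name below is a hypothetical example:

```python
import timm

# Tag 'lvd_1689m' was remapped to 'lvd1689m' (likewise sat_493m -> sat493m);
# the DINOv3 model name used here is hypothetical, for illustration only.
model = timm.create_model('vit_base_patch16_dinov3.lvd1689m', pretrained=False)
```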

Sept 17, 2025


Full Changelog: huggingface/pytorch-image-models@v1.0.19...v1.0.20

v1.0.19

Compare Source

Patch release for a Python 3.9 compat break in 1.0.18

July 23, 2025

  • Add set_input_size() method to EVA models, used by OpenCLIP 3.0.0 to allow resizing for timm-based encoder models (see the sketch after this list).
  • Release 1.0.18, needed for PE-Core S & T models in OpenCLIP 3.0.0
  • Fix small typing issue that broke Python 3.9 compat. 1.0.19 patch release.
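
A short sketch of the new set_input_size() call on an EVA model; the model name is illustrative, and only the img_size kwarg is shown:

```python
import timm

# Create an EVA model, then resize its expected input resolution in place,
# as OpenCLIP 3.0.0 does for timm-based encoders.
model = timm.create_model('eva02_base_patch14_224.mim_in22k', pretrained=False)
model.set_input_size(img_size=(448, 448))
```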

July 21, 2025

  • ROPE support added to NaFlexViT. All models covered by the EVA base (eva.py), including EVA, EVA02, Meta PE ViT, timm SBB ViT w/ ROPE, and Naver ROPE-ViT, can now be loaded in NaFlexViT when use_naflex=True is passed at model creation time (see the sketch after this list)
  • More Meta PE ViT encoders added, including small/tiny variants, lang variants w/ tiling, and more spatial variants.
  • PatchDropout fixed with NaFlexViT and also w/ EVA models (regression after adding Naver ROPE-ViT)
  • Fix XY order with grid_indexing='xy', which impacted non-square image use in 'xy' mode (only ROPE-ViT and PE impacted).
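
A sketch of the use_naflex=True loading path described above, using one of the Naver ROPE-ViT weights listed in the v1.0.17 notes further down:

```python
import timm

# Load an eva.py-based ROPE model into the NaFlexViT implementation.
model = timm.create_model(
    'vit_base_patch16_rope_224.naver_in1k',
    pretrained=False,
    use_naflex=True,  # routes creation through NaFlexViT
)
```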


Full Changelog: huggingface/pytorch-image-models@v1.0.18...v1.0.19

v1.0.18

Compare Source

July 23, 2025

  • Add set_input_size() method to EVA models, used by OpenCLIP 3.0.0 to allow resizing for timm-based encoder models.
  • Release 1.0.18, needed for PE-Core S & T models in OpenCLIP 3.0.0

July 21, 2025

  • ROPE support added to NaFlexViT. All models covered by the EVA base (eva.py), including EVA, EVA02, Meta PE ViT, timm SBB ViT w/ ROPE, and Naver ROPE-ViT, can now be loaded in NaFlexViT when use_naflex=True is passed at model creation time
  • More Meta PE ViT encoders added, including small/tiny variants, lang variants w/ tiling, and more spatial variants.
  • PatchDropout fixed with NaFlexViT and also w/ EVA models (regression after adding Naver ROPE-ViT)
  • Fix XY order with grid_indexing='xy', which impacted non-square image use in 'xy' mode (only ROPE-ViT and PE impacted).


Full Changelog: huggingface/pytorch-image-models@v1.0.17...v1.0.18

v1.0.17

Compare Source

July 7, 2025

  • MobileNet-v5 backbone tweaks for improved Google Gemma 3n behaviour (to pair with updated official weights)
    • Add stem bias (zero'd in updated weights, compat break with old weights)
    • GELU -> GELU (tanh approx). A minor change to be closer to JAX
  • Add two arguments to layer-decay support: a min scale clamp and a 'no optimization' scale threshold
  • Add 'Fp32' LayerNorm, RMSNorm, SimpleNorm variants that can be enabled to force computation of norm in float32
  • Some typing, argument cleanup for norm, norm+act layers done with above
  • Support Naver ROPE-ViT (https://github.com/naver-ai/rope-vit) in eva.py, add RotaryEmbeddingMixed module for mixed mode, weights on the HuggingFace Hub (see the table and sketch below)
| model | img_size | top1 | top5 | param_count |
| --- | --- | --- | --- | --- |
| vit_large_patch16_rope_mixed_ape_224.naver_in1k | 224 | 84.84 | 97.122 | 304.4 |
| vit_large_patch16_rope_mixed_224.naver_in1k | 224 | 84.828 | 97.116 | 304.2 |
| vit_large_patch16_rope_ape_224.naver_in1k | 224 | 84.65 | 97.154 | 304.37 |
| vit_large_patch16_rope_224.naver_in1k | 224 | 84.648 | 97.122 | 304.17 |
| vit_base_patch16_rope_mixed_ape_224.naver_in1k | 224 | 83.894 | 96.754 | 86.59 |
| vit_base_patch16_rope_mixed_224.naver_in1k | 224 | 83.804 | 96.712 | 86.44 |
| vit_base_patch16_rope_ape_224.naver_in1k | 224 | 83.782 | 96.61 | 86.59 |
| vit_base_patch16_rope_224.naver_in1k | 224 | 83.718 | 96.672 | 86.43 |
| vit_small_patch16_rope_224.naver_in1k | 224 | 81.23 | 95.022 | 21.98 |
| vit_small_patch16_rope_mixed_224.naver_in1k | 224 | 81.216 | 95.022 | 21.99 |
| vit_small_patch16_rope_ape_224.naver_in1k | 224 | 81.004 | 95.016 | 22.06 |
| vit_small_patch16_rope_mixed_ape_224.naver_in1k | 224 | 80.986 | 94.976 | 22.06 |
  • Some cleanup of ROPE modules, helpers, and FX tracing leaf registration
  • Preparing version 1.0.17 release
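
The Naver ROPE-ViT weights in the table above load like any other timm model; a minimal sketch:

```python
import torch
import timm

# One of the Naver ROPE-ViT models from the table above (weights on the HF Hub).
model = timm.create_model('vit_small_patch16_rope_224.naver_in1k', pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000]) for the ImageNet-1k head
```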


Full Changelog: huggingface/pytorch-image-models@v1.0.16...v1.0.17

v1.0.16

Compare Source

June 26, 2025

  • MobileNetV5 backbone (w/ encoder only variant) for Gemma 3n image encoder
  • Version 1.0.16 released

June 23, 2025

  • Add F.grid_sample based 2D and factorized pos embed resize to NaFlexViT. Faster when many different sizes are used (based on an example by https://github.com/stas-sl).
  • Further speed up patch embed resample by replacing vmap with matmul (based on snippet by https://github.com/stas-sl).
  • Add 3 initial native aspect NaFlexViT checkpoints created while testing, ImageNet-1k and 3 different pos embed configs w/ same hparams.
| Model | Top-1 Acc | Top-5 Acc | Params (M) | Eval Seq Len |
| --- | --- | --- | --- | --- |
| naflexvit_base_patch16_par_gap.e300_s576_in1k | 83.67 | 96.45 | 86.63 | 576 |
| naflexvit_base_patch16_parfac_gap.e300_s576_in1k | 83.63 | 96.41 | 86.46 | 576 |
| naflexvit_base_patch16_gap.e300_s576_in1k | 83.50 | 96.46 | 86.63 | 576 |
  • Support gradient checkpointing for forward_intermediates and fix some checkpointing bugs (see the sketch after this list). Thanks https://github.com/brianhou0208
  • Add 'corrected weight decay' (https://arxiv.org/abs/2506.02285) as option to AdamW (legacy), Adopt, Kron, Adafactor (BV), Lamb, LaProp, Lion, NadamW, RmsPropTF, SGDW optimizers
  • Switch PE (perception encoder) ViT models to use native timm weights instead of remapping on the fly
  • Fix CUDA stream bug in prefetch loader
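
A hedged sketch of combining gradient checkpointing with forward_intermediates, per the fix noted above; the return layout shown is assumed from forward_intermediates' default arguments:

```python
import torch
import timm

model = timm.create_model('vit_base_patch16_224', pretrained=False)
model.set_grad_checkpointing(True)  # now also honored by forward_intermediates

x = torch.randn(2, 3, 224, 224)
# With default arguments this is assumed to return the final features plus a
# list of intermediate feature maps.
final, intermediates = model.forward_intermediates(x)
```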

June 5, 2025

  • Initial NaFlexVit model code. NaFlexVit is a Vision Transformer with:
    1. Encapsulated embedding and position encoding in a single module
    2. Support for nn.Linear patch embedding on pre-patchified (dictionary) inputs
    3. Support for NaFlex variable aspect, variable resolution (SigLip-2: https://arxiv.org/abs/2502.14786)
    4. Support for FlexiViT variable patch size (https://arxiv.org/abs/2212.08013)
    5. Support for NaViT fractional/factorized position embedding (https://arxiv.org/abs/2307.06304)
  • Existing vit models in vision_transformer.py can be loaded into the NaFlexVit model by adding the use_naflex=True flag to create_model (see the sketch after this list)
    • Some native weights coming soon
  • A full NaFlex data pipeline is available that allows training / fine-tuning / evaluating with variable aspect / size images
    • To enable in train.py and validate.py add the --naflex-loader arg; it must be used with a NaFlexVit
  • To evaluate an existing (classic) ViT loaded in NaFlexVit model w/ NaFlex data pipe:
    • python validate.py /imagenet --amp -j 8 --model vit_base_patch16_224 --model-kwargs use_naflex=True --naflex-loader --naflex-max-seq-len 256
  • The training script has some extra args worth noting
    • The --naflex-train-seq-lens argument specifies which sequence lengths to randomly pick from per batch during training
    • The --naflex-max-seq-len argument sets the target sequence length for validation
    • Adding --model-kwargs enable_patch_interpolator=True --naflex-patch-sizes 12 16 24 will enable random patch size selection per-batch w/ interpolation
    • The --naflex-loss-scale arg changes the loss scaling mode per batch relative to the batch size; timm NaFlex loading changes the batch size for each seq len
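
Putting the creation flag from this list into code, a minimal sketch of loading a classic ViT into NaFlexVit; the variable aspect/size behaviour itself comes from the NaFlex data pipeline flags above, not from this snippet:

```python
import torch
import timm

# Load an existing vision_transformer.py model into the NaFlexVit implementation.
model = timm.create_model('vit_base_patch16_224', pretrained=True, use_naflex=True)

# Standard fixed-size inference still works; variable aspect/size batches
# require the NaFlex loader (--naflex-loader) in train.py / validate.py.
out = model(torch.randn(1, 3, 224, 224))
```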

May 28, 2025


Full Changelog: huggingface/pytorch-image-models@v1.0.15...v1.0.16


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about these updates again.


  • If you want to rebase/retry this PR, check this box

To execute skipped test pipelines, write the comment /ok-to-test.


Documentation

Find out how to configure dependency updates in MintMaker documentation or see all available configuration options in Renovate documentation.

Signed-off-by: konflux-internal-p02 <170854209+konflux-internal-p02[bot]@users.noreply.github.com>
@openshift-ci openshift-ci bot requested review from Jooho and Snomaan6846 January 20, 2026 06:09

openshift-ci bot commented Jan 20, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: konflux-internal-p02[bot]
Once this PR has been reviewed and has the lgtm label, please assign edwardquarm for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Jan 20, 2026

Hi @konflux-internal-p02[bot]. Thanks for your PR.

I'm waiting for a red-hat-data-services member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
