Commit a8df402

Bump transformers 4.53.0 (#3618)
### Changes

Bump packages:

```
transformers==4.53.0
optimum-intel==1.26.0
optimum==2.0.0
```

### Tests

WC: https://github.com/openvinotoolkit/nncf/actions/runs/18844019917
Examples: https://github.com/openvinotoolkit/nncf/actions/runs/18905470590
PTQ: manual/job/post_training_quantization/741/
Nightly Torch: nightly/job/torch_nightly/693/
1 parent 98515cb commit a8df402
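
The three pins called out in the description can be sanity-checked in the target environment. A minimal sketch, assuming the bumped packages are already installed and using only the standard library:

```python
# Minimal check that the environment matches the pins from this commit:
# transformers==4.53.0, optimum-intel==1.26.0, optimum==2.0.0 (assumed installed).
from importlib.metadata import PackageNotFoundError, version

EXPECTED = {"transformers": "4.53.0", "optimum-intel": "1.26.0", "optimum": "2.0.0"}

for name, expected in EXPECTED.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        installed = "not installed"
    status = "OK" if installed == expected else "MISMATCH"
    print(f"{name}: expected {expected}, found {installed} [{status}]")
```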

23 files changed: +648, −629 lines

examples/llm_compression/onnx/tiny_llama/main.py

Lines changed: 5 additions & 0 deletions

```diff
@@ -27,6 +27,11 @@
 MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
 OUTPUT_DIR = ROOT / "tinyllama_compressed"
 
+# TODO(AlexanderDokuchaev): WA for https://github.com/huggingface/optimum-intel/issues/1498
+from optimum.exporters.tasks import TasksManager  # noqa: E402
+
+TasksManager._TRANSFORMERS_TASKS_TO_MODEL_LOADERS["image-text-to-text"] = "AutoModelForImageTextToText"
+
 
 def main():
     # Export the pretrained model in ONNX format. The OUTPUT_DIR directory
```

Lines changed: 6 additions & 4 deletions

```diff
@@ -1,6 +1,8 @@
-transformers==4.52.1
+transformers==4.53.0
 openvino==2025.3.0
-optimum-intel[openvino]
-git+https://github.com/onnx/onnx.git@c25eebcf51b781dbfcc75a9c8bdf5dd1781367fe # onnx-1.19.0.dev
+optimum-intel[openvino]==1.26.0
+optimum-onnx==0.0.3
+optimum==2.0.0
+onnx==1.19.1
 onnxruntime==1.21.1
-torch==2.8.0
+torch==2.9.0
```

examples/llm_compression/onnx/tiny_llama_scale_estimation/main.py

Lines changed: 5 additions & 0 deletions

```diff
@@ -34,6 +34,11 @@
 warnings.filterwarnings("ignore", category=torch.jit.TracerWarning)
 warnings.filterwarnings("ignore", category=OnnxExporterWarning)
 
+# TODO(AlexanderDokuchaev): WA for https://github.com/huggingface/optimum-intel/issues/1498
+from optimum.exporters.tasks import TasksManager  # noqa: E402
+
+TasksManager._TRANSFORMERS_TASKS_TO_MODEL_LOADERS["image-text-to-text"] = "AutoModelForImageTextToText"
+
 
 def tiny_llama_transform_func(
     item: dict[str, str], tokenizer: LlamaTokenizerFast, onnx_model: onnx.ModelProto
```
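
Both TinyLlama examples above add the same five-line patch at module import time, before any export runs: it maps the `image-text-to-text` task to the `AutoModelForImageTextToText` loader in optimum's `TasksManager` registry, as a workaround for the behaviour tracked in huggingface/optimum-intel#1498. A standalone sketch of the patch, mirrored from the diffs above for reference:

```python
# Workaround (WA) for https://github.com/huggingface/optimum-intel/issues/1498,
# mirrored from the two main.py diffs above: register a model loader for the
# "image-text-to-text" task in optimum's TasksManager before exporting.
from optimum.exporters.tasks import TasksManager

TasksManager._TRANSFORMERS_TASKS_TO_MODEL_LOADERS["image-text-to-text"] = "AutoModelForImageTextToText"
```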

Lines changed: 5 additions & 3 deletions

```diff
@@ -1,7 +1,9 @@
-torch==2.8.0
-transformers==4.52.1
+torch==2.9.0
+transformers==4.53.0
 openvino==2025.3.0
-optimum-intel[openvino]
+optimum-intel[openvino]==1.26.0
+optimum-onnx==0.0.3
+optimum==2.0.0
 onnx==1.17.0
 onnxruntime==1.21.1
 datasets==2.14.7
```

Lines changed: 6 additions & 4 deletions

```diff
@@ -1,6 +1,8 @@
 openvino==2025.3.0
-optimum-intel[openvino]>=1.22.0
-transformers==4.52.1
+optimum-intel[openvino]==1.26.0
+optimum-onnx==0.0.3
+optimum==2.0.0
+transformers==4.53.0
 onnx==1.17.0
-torch==2.8.0
-torchvision==0.23.0
+torch==2.9.0
+torchvision==0.24.0
```

Lines changed: 6 additions & 4 deletions

```diff
@@ -1,6 +1,8 @@
-datasets
+datasets==4.3.0
 openvino==2025.3.0
-optimum-intel[openvino]>=1.22.0
-transformers==4.52.1
+optimum-intel[openvino]==1.26.0
+optimum-onnx==0.0.3
+optimum==2.0.0
+transformers==4.53.0
 onnx==1.17.0
-torch==2.8.0
+torch==2.9.0
```

Lines changed: 6 additions & 4 deletions

```diff
@@ -1,6 +1,8 @@
-transformers==4.52.1
 datasets==2.14.7
-openvino==2025.3.0
-optimum-intel[openvino]>=1.22.0
 onnx==1.17.0
-torch==2.8.0
+openvino==2025.3.0
+optimum-intel[openvino]==1.26.0
+optimum-onnx==0.0.3
+optimum==2.0.0
+torch==2.9.0
+transformers==4.53.0
```

Lines changed: 4 additions & 2 deletions

```diff
@@ -1,7 +1,9 @@
 whowhatbench @ git+https://github.com/openvinotoolkit/[email protected]#subdirectory=tools/who_what_benchmark
 numpy==1.26.4
 openvino==2025.3.0
-optimum-intel==1.24.0
-transformers==4.52.1
+optimum-intel==1.26.0
+optimum-onnx==0.0.3
+optimum==2.0.0
+transformers==4.53.0
 onnx==1.17.0
 torch==2.9.0
```

Lines changed: 5 additions & 3 deletions

```diff
@@ -1,7 +1,9 @@
-torch==2.8.0
+torch==2.9.0
 datasets==3.0.1
 numpy>=1.23.5,<2
 openvino==2025.3.0
-optimum-intel>=1.22.0
-transformers==4.52.1
+optimum-intel==1.26.0
+optimum-onnx==0.0.3
+optimum==2.0.0
+transformers==4.53.0
 onnx==1.17.0
```

Lines changed: 5 additions & 3 deletions

```diff
@@ -1,7 +1,9 @@
 tensorboard==2.13.0
-torch==2.8.0
+torch==2.9.0
 numpy>=1.23.5,<2
 openvino==2025.3.0
-optimum-intel>=1.22.0
-transformers==4.52.1
+optimum-intel==1.26.0
+optimum-onnx==0.0.3
+optimum==2.0.0
+transformers==4.53.0
 lm_eval==0.4.8
```
