
Commit 4dded50

dependabot[bot] and tylertitsworth authored
Bump the pytorch group across 1 directory with 14 updates (#309)
Signed-off-by: dependabot[bot] <[email protected]>
Signed-off-by: tylertitsworth <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: tylertitsworth <[email protected]>
1 parent 9803461 commit 4dded50

File tree: 10 files changed (+70, -59 lines)

pytorch/README.md

Lines changed: 21 additions & 10 deletions
@@ -66,7 +66,8 @@ The images below are built only with CPU optimizations (GPU acceleration support
 
 | Tag(s) | Pytorch | IPEX | Dockerfile |
 | -------------------------- | -------- | ------------ | --------------- |
-| `2.3.0-pip-base`, `latest` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
+| `2.4.0-pip-base`, `latest` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
+| `2.3.0-pip-base` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
 | `2.2.0-pip-base` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
 | `2.1.0-pip-base` | [v2.1.0] | [v2.1.0+cpu] | [v0.2.3] |
 | `2.0.0-pip-base` | [v2.0.0] | [v2.0.0+cpu] | [v0.1.0] |
@@ -83,6 +84,7 @@ The images below additionally include [Jupyter Notebook](https://jupyter.org/) s
 
 | Tag(s) | Pytorch | IPEX | Dockerfile |
 | ------------------- | -------- | ------------ | --------------- |
+| `2.4.0-pip-jupyter` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
 | `2.3.0-pip-jupyter` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
 | `2.2.0-pip-jupyter` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
 | `2.1.0-pip-jupyter` | [v2.1.0] | [v2.1.0+cpu] | [v0.2.3] |
@@ -93,7 +95,7 @@ docker run -it --rm \
 -p 8888:8888 \
 -v $PWD/workspace:/workspace \
 -w /workspace \
-intel/intel-extension-for-pytorch:2.3.0-pip-jupyter
+intel/intel-extension-for-pytorch:2.4.0-pip-jupyter
 ```
 
 After running the command above, copy the URL (something like `http://127.0.0.1:$PORT/?token=***`) into your browser to access the notebook server.
@@ -104,6 +106,7 @@ The images below additionally include [Intel® oneAPI Collective Communications
 
 | Tag(s) | Pytorch | IPEX | oneCCL | INC | Dockerfile |
 | --------------------- | -------- | ------------ | -------------------- | --------- | -------------- |
+| `2.4.0-pip-multinode` | [v2.4.0] | [v2.4.0+cpu] | [v2.4.0][ccl-v2.4.0] | [v3.0] | [v0.4.0-Beta] |
 | `2.3.0-pip-multinode` | [v2.3.0] | [v2.3.0+cpu] | [v2.3.0][ccl-v2.3.0] | [v2.6] | [v0.4.0-Beta] |
 | `2.2.0-pip-multinode` | [v2.2.2] | [v2.2.0+cpu] | [v2.2.0][ccl-v2.2.0] | [v2.6] | [v0.4.0-Beta] |
 | `2.1.100-pip-mulitnode` | [v2.1.2] | [v2.1.100+cpu] | [v2.1.0][ccl-v2.1.0] | [v2.6] | [v0.4.0-Beta] |
@@ -186,7 +189,7 @@ To add these files correctly please follow the steps described below.
 -v $PWD/authorized_keys:/etc/ssh/authorized_keys \
 -v $PWD/tests:/workspace/tests \
 -w /workspace \
-intel/intel-extension-for-pytorch:2.3.0-pip-multinode \
+intel/intel-extension-for-pytorch:2.4.0-pip-multinode \
 bash -c '/usr/sbin/sshd -D'
 ```
 
@@ -199,7 +202,7 @@ To add these files correctly please follow the steps described below.
 -v $PWD/tests:/workspace/tests \
 -v $PWD/hostfile:/workspace/hostfile \
 -w /workspace \
-intel/intel-extension-for-pytorch:2.3.0-pip-multinode \
+intel/intel-extension-for-pytorch:2.4.0-pip-multinode \
 bash -c 'ipexrun cpu --nnodes 2 --nprocs-per-node 1 --master-addr 127.0.0.1 --master-port 3022 /workspace/tests/ipex-resnet50.py --ipex --device cpu --backend ccl'
 ```
 
@@ -227,7 +230,7 @@ Additionally, if you have a [DeepSpeed* configuration](https://www.deepspeed.ai/
 -v $PWD/hostfile:/workspace/hostfile \
 -v $PWD/ds_config.json:/workspace/ds_config.json \
 -w /workspace \
-intel/intel-extension-for-pytorch:2.3.0-pip-multinode \
+intel/intel-extension-for-pytorch:2.4.0-pip-multinode \
 bash -c 'deepspeed --launcher IMPI \
 --master_addr 127.0.0.1 --master_port 3022 \
 --deepspeed_config ds_config.json --hostfile /workspace/hostfile \
@@ -240,9 +243,9 @@ Additionally, if you have a [DeepSpeed* configuration](https://www.deepspeed.ai/
 
 The image below is an extension of the IPEX Multi-Node Container designed to run Hugging Face Generative AI scripts. The container has the typical installations needed to run and fine tune PyTorch generative text models from Hugging Face. It can be used to run multinode jobs using the same instructions from the [IPEX Multi-Node container](#setup-and-run-ipex-multi-node-container).
 
-| Tag(s) | Pytorch | IPEX | oneCCL | transformers | Dockerfile |
-| --------------------- | -------- | ------------ | -------------------- | --------- | --------------- |
-| `2.3.0-pip-multinode-hf-4.41.2-genai` | [v2.3.1](https://github.com/pytorch/pytorch/releases/tag/v2.3.1) | [v2.3.0+cpu] | [v2.3.0][ccl-v2.3.0] | [v4.41.2] | [v0.4.0-Beta] |
+| Tag(s) | Pytorch | IPEX | oneCCL | HF Transformers | Dockerfile |
+| ------------------------------------- | -------- | ------------ | -------------------- | --------------- | --------------- |
+| `2.4.0-pip-multinode-hf-4.44.0-genai` | [v2.4.0] | [v2.4.0+cpu] | [v2.4.0][ccl-v2.4.0] | [v4.44.0] | [v0.4.0-Beta] |
 
 Below is an example that shows single node job with the existing [`finetune.py`](../workflows/charts/huggingface-llm/scripts/finetune.py) script.
 
@@ -251,7 +254,7 @@ Below is an example that shows single node job with the existing [`finetune.py`]
 docker run -it \
 -v $PWD/workflows/charts/huggingface-llm/scripts:/workspace/scripts \
 -w /workspace/scripts \
-intel/intel-extension-for-pytorch:2.3.0-pip-multinode-hf-4.41.2-genai \
+intel/intel-extension-for-pytorch:2.4.0-pip-multinode-hf-4.44.0-genai \
 bash -c 'python finetune.py <script-args>'
 ```
 
@@ -261,6 +264,7 @@ The images below are [TorchServe*] with CPU Optimizations:
 
 | Tag(s) | Pytorch | IPEX | Dockerfile |
 | ------------------- | -------- | ------------ | --------------- |
+| `2.4.0-serving-cpu` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
 | `2.3.0-serving-cpu` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
 | `2.2.0-serving-cpu` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
 
@@ -272,6 +276,7 @@ The images below are built only with CPU optimizations (GPU acceleration support
 
 | Tag(s) | Pytorch | IPEX | Dockerfile |
 | ---------------- | -------- | ------------ | --------------- |
+| `2.4.0-idp-base` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
 | `2.3.0-idp-base` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
 | `2.2.0-idp-base` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
 | `2.1.0-idp-base` | [v2.1.0] | [v2.1.0+cpu] | [v0.2.3] |
@@ -281,6 +286,7 @@ The images below additionally include [Jupyter Notebook](https://jupyter.org/) s
 
 | Tag(s) | Pytorch | IPEX | Dockerfile |
 | ------------------- | -------- | ------------ | --------------- |
+| `2.4.0-idp-jupyter` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
 | `2.3.0-idp-jupyter` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
 | `2.2.0-idp-jupyter` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
 | `2.1.0-idp-jupyter` | [v2.1.0] | [v2.1.0+cpu] | [v0.2.3] |
@@ -290,6 +296,7 @@ The images below additionally include [Intel® oneAPI Collective Communications
 
 | Tag(s) | Pytorch | IPEX | oneCCL | INC | Dockerfile |
 | --------------------- | -------- | ------------ | -------------------- | --------- | --------------- |
+| `2.4.0-idp-multinode` | [v2.4.0] | [v2.4.0+cpu] | [v2.4.0][ccl-v2.3.0] | [v3.0] | [v0.4.0-Beta] |
 | `2.3.0-idp-multinode` | [v2.3.0] | [v2.3.0+cpu] | [v2.3.0][ccl-v2.3.0] | [v2.6] | [v0.4.0-Beta] |
 | `2.2.0-idp-multinode` | [v2.2.0] | [v2.2.0+cpu] | [v2.2.0][ccl-v2.2.0] | [v2.4.1] | [v0.3.4] |
 | `2.1.0-idp-mulitnode` | [v2.1.0] | [v2.1.0+cpu] | [v2.1.0][ccl-v2.1.0] | [v2.3.1] | [v0.2.3] |
@@ -380,6 +387,7 @@ It is the image user's responsibility to ensure that any use of The images below
 [v2.1.10+xpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.1.10%2Bxpu
 [v2.0.110+xpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.0.110%2Bxpu
 
+[v2.4.0]: https://github.com/pytorch/pytorch/releases/tag/v2.4.0
 [v2.3.0]: https://github.com/pytorch/pytorch/releases/tag/v2.3.0
 [v2.2.2]: https://github.com/pytorch/pytorch/releases/tag/v2.2.2
 [v2.2.0]: https://github.com/pytorch/pytorch/releases/tag/v2.2.0
@@ -388,25 +396,28 @@ It is the image user's responsibility to ensure that any use of The images below
 [v2.0.1]: https://github.com/pytorch/pytorch/releases/tag/v2.0.1
 [v2.0.0]: https://github.com/pytorch/pytorch/releases/tag/v2.0.0
 
+[v3.0]: https://github.com/intel/neural-compressor/releases/tag/v3.0
 [v2.6]: https://github.com/intel/neural-compressor/releases/tag/v2.6
 [v2.4.1]: https://github.com/intel/neural-compressor/releases/tag/v2.4.1
 [v2.3.1]: https://github.com/intel/neural-compressor/releases/tag/v2.3.1
 [v2.1.1]: https://github.com/intel/neural-compressor/releases/tag/v2.1.1
 
+[v2.4.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.4.0%2Bcpu
 [v2.3.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.3.0%2Bcpu
 [v2.2.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.2.0%2Bcpu
 [v2.1.100+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.1.0%2Bcpu
 [v2.1.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.1.0%2Bcpu
 [v2.0.100+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.0.0%2Bcpu
 [v2.0.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.0.0%2Bcpu
 
+[ccl-v2.4.0]: https://github.com/intel/torch-ccl/releases/tag/v2.4.0%2Bcpu%2Brc0
 [ccl-v2.3.0]: https://github.com/intel/torch-ccl/releases/tag/v2.3.0%2Bcpu
 [ccl-v2.2.0]: https://github.com/intel/torch-ccl/releases/tag/v2.2.0%2Bcpu
 [ccl-v2.1.0]: https://github.com/intel/torch-ccl/releases/tag/v2.1.0%2Bcpu
 [ccl-v2.0.0]: https://github.com/intel/torch-ccl/releases/tag/v2.1.0%2Bcpu
 
 <!-- HuggingFace transformers releases -->
-[v4.41.2]: https://github.com/huggingface/transformers/releases/tag/v4.41.2
+[v4.44.0]: https://github.com/huggingface/transformers/releases/tag/v4.44.0
 
 [803]: https://dgpu-docs.intel.com/releases/LTS_803.29_20240131.html
 [736]: https://dgpu-docs.intel.com/releases/stable_736_25_20231031.html
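
As a quick sanity check of the bumped README tags, the new base image can be asked to print its runtime versions, mirroring the version-check command the compose file uses; a minimal sketch, assuming the `2.4.0-pip-base` tag listed above is published under `intel/intel-extension-for-pytorch`:

```bash
# Pull the bumped base image and print the torch / IPEX versions it ships
docker run --rm intel/intel-extension-for-pytorch:2.4.0-pip-base \
    python -c 'import torch, intel_extension_for_pytorch as ipex; print("torch:", torch.__version__, " ipex:", ipex.__version__)'
```

If the bump landed as intended, this should report a 2.4.0 torch build and IPEX 2.4.0+cpu.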

pytorch/docker-compose.yaml

Lines changed: 18 additions & 18 deletions
@@ -25,16 +25,16 @@ services:
 BASE_IMAGE_NAME: ${BASE_IMAGE_NAME:-ubuntu}
 BASE_IMAGE_TAG: ${BASE_IMAGE_TAG:-22.04}
 GITHUB_RUN_NUMBER: ${GITHUB_RUN_NUMBER:-0}
-IPEX_VERSION: ${IPEX_VERSION:-2.3.0}
+IPEX_VERSION: ${IPEX_VERSION:-2.4.0}
 MINIFORGE_VERSION: ${MINIFORGE_VERSION:-Linux-x86_64}
 NO_PROXY: ''
 PACKAGE_OPTION: ${PACKAGE_OPTION:-pip}
 PYTHON_VERSION: ${PYTHON_VERSION:-3.10}
-PYTORCH_VERSION: ${PYTORCH_VERSION:-2.3.0+cpu}
+PYTORCH_VERSION: ${PYTORCH_VERSION:-2.4.0+cpu}
 REGISTRY: ${REGISTRY}
 REPO: ${REPO}
-TORCHAUDIO_VERSION: ${TORCHAUDIO_VERSION:-2.3.0+cpu}
-TORCHVISION_VERSION: ${TORCHVISION_VERSION:-0.18.0+cpu}
+TORCHAUDIO_VERSION: ${TORCHAUDIO_VERSION:-2.4.0}
+TORCHVISION_VERSION: ${TORCHVISION_VERSION:-0.19.0}
 context: .
 labels:
 dependency.python: ${PYTHON_VERSION:-3.10}
@@ -43,29 +43,29 @@ services:
 org.opencontainers.base.name: "intel/python:3.10-core"
 org.opencontainers.image.name: "intel/intel-optimized-pytorch"
 org.opencontainers.image.title: "Intel® Extension for PyTorch Base Image"
-org.opencontainers.image.version: ${IPEX_VERSION:-2.2.0}-${PACKAGE_OPTION:-pip}-base
+org.opencontainers.image.version: ${IPEX_VERSION:-2.4.0}-${PACKAGE_OPTION:-pip}-base
 target: ipex-base-${PACKAGE_OPTION:-pip}
 command: >
 sh -c "python -c 'import torch; import intel_extension_for_pytorch as ipex; print(\"torch:\", torch.__version__, \" ipex:\",ipex.__version__)'"
 depends_on:
 - ${PACKAGE_OPTION:-pip}
-image: ${REGISTRY}/${REPO}:b-${GITHUB_RUN_NUMBER:-0}-${BASE_IMAGE_NAME:-ubuntu}-${BASE_IMAGE_TAG:-22.04}-${PACKAGE_OPTION:-pip}-py${PYTHON_VERSION:-3.10}-ipex-${IPEX_VERSION:-2.3.0}-base
+image: ${REGISTRY}/${REPO}:b-${GITHUB_RUN_NUMBER:-0}-${BASE_IMAGE_NAME:-ubuntu}-${BASE_IMAGE_TAG:-22.04}-${PACKAGE_OPTION:-pip}-py${PYTHON_VERSION:-3.10}-ipex-${IPEX_VERSION:-2.4.0}-base
 pull_policy: always
 jupyter:
 build:
 labels:
 dependency.python.pip: jupyter-requirements.txt
-org.opencontainers.base.name: "intel/intel-optimized-pytorch:${IPEX_VERSION:-2.2.0}-${PACKAGE_OPTION:-pip}-base"
+org.opencontainers.base.name: "intel/intel-optimized-pytorch:${IPEX_VERSION:-2.4.0}-${PACKAGE_OPTION:-pip}-base"
 org.opencontainers.image.title: "Intel® Extension for PyTorch Jupyter Image"
-org.opencontainers.image.version: ${IPEX_VERSION:-2.2.0}-${PACKAGE_OPTION:-pip}-jupyter
+org.opencontainers.image.version: ${IPEX_VERSION:-2.4.0}-${PACKAGE_OPTION:-pip}-jupyter
 target: jupyter
 command: >
 bash -c "python -m jupyter --version"
 environment:
 http_proxy: ${http_proxy}
 https_proxy: ${https_proxy}
 extends: ipex-base
-image: ${REGISTRY}/${REPO}:b-${GITHUB_RUN_NUMBER:-0}-${BASE_IMAGE_NAME:-ubuntu}-${BASE_IMAGE_TAG:-22.04}-${PACKAGE_OPTION:-pip}-py${PYTHON_VERSION:-3.10}-ipex-${IPEX_VERSION:-2.3.0}-jupyter
+image: ${REGISTRY}/${REPO}:b-${GITHUB_RUN_NUMBER:-0}-${BASE_IMAGE_NAME:-ubuntu}-${BASE_IMAGE_TAG:-22.04}-${PACKAGE_OPTION:-pip}-py${PYTHON_VERSION:-3.10}-ipex-${IPEX_VERSION:-2.4.0}-jupyter
 network_mode: host
 ports:
 - 8888:8888
@@ -79,17 +79,17 @@ services:
 dependency.pip.apt.virtualenv: true
 dependency.pip.deepspeed: 0.14.4
 dependency.python.pip: multinode/requirements.txt
-org.opencontainers.base.name: "intel/intel-optimized-pytorch:${IPEX_VERSION:-2.2.0}-${PACKAGE_OPTION:-pip}-base"
+org.opencontainers.base.name: "intel/intel-optimized-pytorch:${IPEX_VERSION:-2.4.0}-${PACKAGE_OPTION:-pip}-base"
 org.opencontainers.image.title: "Intel® Extension for PyTorch MultiNode Image"
-org.opencontainers.image.version: ${IPEX_VERSION:-2.2.0}-${PACKAGE_OPTION:-pip}-multinode
+org.opencontainers.image.version: ${IPEX_VERSION:-2.4.0}-${PACKAGE_OPTION:-pip}-multinode
 target: multinode
 command: >
 bash -c "python -c 'import neural_compressor;import oneccl_bindings_for_pytorch as oneccl;import deepspeed;
 print(\"Neural Compressor:\", neural_compressor.__version__,
 \"\\nOneCCL:\", oneccl.__version__,
 \"\\nDeepspeed:\", deepspeed.__version__)'"
 extends: ipex-base
-image: ${REGISTRY}/${REPO}:b-${GITHUB_RUN_NUMBER:-0}-${BASE_IMAGE_NAME:-ubuntu}-${BASE_IMAGE_TAG:-22.04}-${PACKAGE_OPTION:-pip}-py${PYTHON_VERSION:-3.10}-ipex-${IPEX_VERSION:-2.3.0}-oneccl-inc-${INC_VERSION:-2.6}
+image: ${REGISTRY}/${REPO}:b-${GITHUB_RUN_NUMBER:-0}-${BASE_IMAGE_NAME:-ubuntu}-${BASE_IMAGE_TAG:-22.04}-${PACKAGE_OPTION:-pip}-py${PYTHON_VERSION:-3.10}-ipex-${IPEX_VERSION:-2.4.0}-oneccl-inc-${INC_VERSION:-3.0}
 shm_size: 2gb
 xpu:
 build:
@@ -177,7 +177,7 @@ services:
 docs: serving
 org.opencontainers.base.name: "intel/python:3.10-core"
 org.opencontainers.image.title: "Intel® Extension for PyTorch Serving Image"
-org.opencontainers.image.version: ${IPEX_VERSION:-2.2.0}-serving-cpu
+org.opencontainers.image.version: ${IPEX_VERSION:-2.4.0}-serving-cpu
 target: torchserve
 command: torchserve --version
 entrypoint: ""
@@ -192,14 +192,14 @@ services:
 hf-genai:
 build:
 args:
-HF_VERSION: ${HF_VERSION:-4.41.2}
+HF_VERSION: ${HF_VERSION:-4.44.0}
 labels:
 dependency.python.pip: hf-genai-requirements.txt
-org.opencontainers.base.name: "intel/intel-optimized-pytorch:${IPEX_VERSION:-2.3.0}-${PACKAGE_OPTION:-pip}-multinode"
+org.opencontainers.base.name: "intel/intel-optimized-pytorch:${IPEX_VERSION:-2.4.0}-${PACKAGE_OPTION:-pip}-multinode"
 org.opencontainers.image.title: "Intel® Extension for PyTorch MultiNode Huggingface Generative AI Image"
-org.opencontainers.image.version: ${IPEX_VERSION:-2.3.0}-${PACKAGE_OPTION:-pip}-multinode-hf-${HF_VERSION:-4.41.2}-genai"
+org.opencontainers.image.version: ${IPEX_VERSION:-2.4.0}-${PACKAGE_OPTION:-pip}-multinode-hf-${HF_VERSION:-4.44.0}-genai"
 target: hf-genai
 extends: ipex-base
-image: ${REGISTRY}/${REPO}:b-${GITHUB_RUN_NUMBER:-0}-${BASE_IMAGE_NAME:-ubuntu}-${BASE_IMAGE_TAG:-22.04}-${PACKAGE_OPTION:-pip}-py${PYTHON_VERSION:-3.10}-ipex-${IPEX_VERSION:-2.3.0}-hf-${HF_VERSION:-4.41.2}
+image: ${REGISTRY}/${REPO}:b-${GITHUB_RUN_NUMBER:-0}-${BASE_IMAGE_NAME:-ubuntu}-${BASE_IMAGE_TAG:-22.04}-${PACKAGE_OPTION:-pip}-py${PYTHON_VERSION:-3.10}-ipex-${IPEX_VERSION:-2.4.0}-hf-${HF_VERSION:-4.44.0}
 command: >
-bash -c "python-c 'import transformers; print(transformers.__version__)'"
+bash -c "python -c 'import transformers; print(transformers.__version__)'"

pytorch/hf-genai-requirements.txt

Lines changed: 5 additions & 5 deletions
@@ -1,13 +1,13 @@
-accelerate==0.32.1
-datasets==2.20.0
+accelerate==0.33.0
+datasets==2.21.0
 einops==0.8.0
 evaluate==0.4.2
 onnxruntime-extensions==0.11.0
 onnxruntime==1.18.1
-peft==0.11.1
-protobuf==5.27.2
+peft==0.12.0
+protobuf==5.27.3
 py-cpuinfo==9.0.0
 scikit-learn==1.5.1
 SentencePiece==0.2.0
 tokenizers==0.19.1
-transformers==4.42.4
+transformers==4.44.0

pytorch/jupyter-requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-jupyterlab==4.3.0a2
+jupyterlab==4.3.0b0
 jupyterhub==5.1.0
 notebook==7.3.0a1
 jupyter-server-proxy>=4.1.2

pytorch/multinode/requirements.txt

Lines changed: 3 additions & 3 deletions
@@ -1,5 +1,5 @@
-neural-compressor==2.6
-oneccl_bind_pt==2.3.0+cpu
---extra-index-url https://developer.intel.com/ipex-whl-stable-cpu
+neural-compressor==3.0
+oneccl_bind_pt==2.4.0+cpu
+--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
 oneccl-devel>=2021.13.0 # required to build deepspeed ops
 mpi4py>=3.1.0 # required to build deepspeed ops

pytorch/requirements.txt

Lines changed: 5 additions & 5 deletions
@@ -1,6 +1,6 @@
-torch==2.3.1
-torchvision==0.18.1
-torchaudio==2.3.1
+torch==2.4.0
+torchvision==0.19.0
+torchaudio==2.4.0
 -f https://download.pytorch.org/whl/cpu/torch_stable.html
-intel_extension_for_pytorch==2.3.100+cpu
---extra-index-url https://developer.intel.com/ipex-whl-stable-cpu
+intel_extension_for_pytorch==2.4.0+cpu
+--extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/
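
The same pin set can also be exercised outside the containers; a minimal sketch, assuming a clean Python 3.10 virtual environment (the interpreter version the compose file defaults to) and network access to the new extra index:

```bash
# Create an isolated environment and install the bumped pins;
# IPEX CPU wheels are resolved from the new extra index URL
python3.10 -m venv .venv && . .venv/bin/activate
pip install -r pytorch/requirements.txt
# Print the resolved versions to confirm they match the pins
python -c 'import torch, torchvision, torchaudio, intel_extension_for_pytorch as ipex; print(torch.__version__, torchvision.__version__, torchaudio.__version__, ipex.__version__)'
```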

pytorch/serving/README.md

Lines changed: 3 additions & 3 deletions
@@ -16,7 +16,7 @@ Follow the instructions found in the link above depending on whether you are int
 curl -O https://download.pytorch.org/models/squeezenet1_1-b8a52dc0.pth
 docker run --rm -it \
 -v $PWD:/home/model-server \
-intel/intel-optimized-pytorch:2.2.0-serving-cpu \
+intel/intel-optimized-pytorch:2.4.0-serving-cpu \
 torch-model-archiver --model-name squeezenet \
 --version 1.0 \
 --model-file model-archive/model.py \
@@ -34,7 +34,7 @@ Test Torchserve with the new archived model. The example below is for the squeez
 docker run -d --rm --name server \
 -v $PWD:/home/model-server/model-store \
 --net=host \
-intel/intel-optimized-pytorch:2.2.0-serving-cpu
+intel/intel-optimized-pytorch:2.4.0-serving-cpu
 # Verify that the container has launched successfully
 docker logs server
 # Attempt to register the model and make an inference request
@@ -87,7 +87,7 @@ As demonstrated in the above example, models must be registered before they can
 -v $PWD:/home/model-server/model-store \
 -v $PWD/config.properties:/home/model-server/config.properties \
 --net=host \
-intel/intel-optimized-pytorch:2.2.0-serving-cpu
+intel/intel-optimized-pytorch:2.4.0-serving-cpu
 # Verify that the container has launched successfully
 docker logs server
 # Check the models list
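
With the serving container running, the registration and inference steps referenced in the comments above go through TorchServe's standard REST APIs; a sketch, assuming the default management (8081) and inference (8080) ports and the `squeezenet.mar` archive produced by the `torch-model-archiver` step (the test image path is a placeholder):

```bash
# Register the archived model through the management API and start one worker
curl -X POST "http://localhost:8081/models?url=squeezenet.mar&initial_workers=1"
# Confirm the model appears in the registered models list
curl http://localhost:8081/models
# Request a prediction from the inference API; replace image.jpg with any local test image
curl http://localhost:8080/predictions/squeezenet -T image.jpg
```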
