@@ -66,7 +66,8 @@ The images below are built only with CPU optimizations (GPU acceleration support

| Tag(s) | Pytorch | IPEX | Dockerfile |
| -------------------------- | -------- | ------------ | --------------- |
- | `2.3.0-pip-base`, `latest` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
+ | `2.4.0-pip-base`, `latest` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
+ | `2.3.0-pip-base` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
| `2.2.0-pip-base` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
| `2.1.0-pip-base` | [v2.1.0] | [v2.1.0+cpu] | [v0.2.3] |
| `2.0.0-pip-base` | [v2.0.0] | [v2.0.0+cpu] | [v0.1.0] |
@@ -83,6 +84,7 @@ The images below additionally include [Jupyter Notebook](https://jupyter.org/) s

| Tag(s) | Pytorch | IPEX | Dockerfile |
| ------------------- | -------- | ------------ | --------------- |
+ | `2.4.0-pip-jupyter` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
| `2.3.0-pip-jupyter` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
| `2.2.0-pip-jupyter` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
| `2.1.0-pip-jupyter` | [v2.1.0] | [v2.1.0+cpu] | [v0.2.3] |
@@ -93,7 +95,7 @@ docker run -it --rm \
-p 8888:8888 \
-v $PWD/workspace:/workspace \
-w /workspace \
- intel/intel-extension-for-pytorch:2.3.0-pip-jupyter
+ intel/intel-extension-for-pytorch:2.4.0-pip-jupyter
```

After running the command above, copy the URL (something like `http://127.0.0.1:$PORT/?token=***`) into your browser to access the notebook server.
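
If the container is started in the background instead of interactively, the tokenized URL can be recovered from the container logs. A minimal sketch, assuming the illustrative container name `ipex-jupyter`:

```
# Run the same image detached under a chosen name (name is illustrative)
docker run -d --name ipex-jupyter -p 8888:8888 \
    intel/intel-extension-for-pytorch:2.4.0-pip-jupyter

# Jupyter prints the tokenized URL to its logs; print the first match
docker logs ipex-jupyter 2>&1 | grep -m1 'token='
```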
@@ -104,6 +106,7 @@ The images below additionally include [Intel® oneAPI Collective Communications

| Tag(s) | Pytorch | IPEX | oneCCL | INC | Dockerfile |
| --------------------- | -------- | ------------ | -------------------- | --------- | -------------- |
+ | `2.4.0-pip-multinode` | [v2.4.0] | [v2.4.0+cpu] | [v2.4.0][ccl-v2.4.0] | [v3.0] | [v0.4.0-Beta] |
| `2.3.0-pip-multinode` | [v2.3.0] | [v2.3.0+cpu] | [v2.3.0][ccl-v2.3.0] | [v2.6] | [v0.4.0-Beta] |
| `2.2.0-pip-multinode` | [v2.2.2] | [v2.2.0+cpu] | [v2.2.0][ccl-v2.2.0] | [v2.6] | [v0.4.0-Beta] |
| `2.1.100-pip-mulitnode` | [v2.1.2] | [v2.1.100+cpu] | [v2.1.0][ccl-v2.1.0] | [v2.6] | [v0.4.0-Beta] |
@@ -186,7 +189,7 @@ To add these files correctly please follow the steps described below.
-v $PWD/authorized_keys:/etc/ssh/authorized_keys \
-v $PWD/tests:/workspace/tests \
-w /workspace \
- intel/intel-extension-for-pytorch:2.3.0-pip-multinode \
+ intel/intel-extension-for-pytorch:2.4.0-pip-multinode \
bash -c '/usr/sbin/sshd -D'
```

@@ -199,7 +202,7 @@ To add these files correctly please follow the steps described below.
-v $PWD/tests:/workspace/tests \
-v $PWD/hostfile:/workspace/hostfile \
-w /workspace \
- intel/intel-extension-for-pytorch:2.3.0-pip-multinode \
+ intel/intel-extension-for-pytorch:2.4.0-pip-multinode \
bash -c 'ipexrun cpu --nnodes 2 --nprocs-per-node 1 --master-addr 127.0.0.1 --master-port 3022 /workspace/tests/ipex-resnet50.py --ipex --device cpu --backend ccl'
```

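The `hostfile` mounted in the command above is not shown in this diff. As a point of reference, MPI-style launchers generally expect one reachable hostname or IP address per line, one entry per node; a minimal sketch with placeholder addresses matching `--nnodes 2`:

```
192.168.1.10
192.168.1.11
```
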
@@ -227,7 +230,7 @@ Additionally, if you have a [DeepSpeed* configuration](https://www.deepspeed.ai/
-v $PWD/hostfile:/workspace/hostfile \
-v $PWD/ds_config.json:/workspace/ds_config.json \
-w /workspace \
- intel/intel-extension-for-pytorch:2.3.0-pip-multinode \
+ intel/intel-extension-for-pytorch:2.4.0-pip-multinode \
bash -c 'deepspeed --launcher IMPI \
--master_addr 127.0.0.1 --master_port 3022 \
--deepspeed_config ds_config.json --hostfile /workspace/hostfile \
@@ -240,9 +243,9 @@ Additionally, if you have a [DeepSpeed* configuration](https://www.deepspeed.ai/

The image below is an extension of the IPEX Multi-Node Container designed to run Hugging Face Generative AI scripts. The container has the typical installations needed to run and fine-tune PyTorch generative text models from Hugging Face. It can be used to run multi-node jobs using the same instructions from the [IPEX Multi-Node container](#setup-and-run-ipex-multi-node-container).

- | Tag(s) | Pytorch | IPEX | oneCCL | transformers | Dockerfile |
- | --------------------- | -------- | ------------ | -------------------- | --------- | --------------- |
- | `2.3.0-pip-multinode-hf-4.41.2-genai` | [v2.3.1](https://github.com/pytorch/pytorch/releases/tag/v2.3.1) | [v2.3.0+cpu] | [v2.3.0][ccl-v2.3.0] | [v4.41.2] | [v0.4.0-Beta] |
+ | Tag(s) | Pytorch | IPEX | oneCCL | HF Transformers | Dockerfile |
+ | ------------------------------------- | -------- | ------------ | -------------------- | --------------- | --------------- |
+ | `2.4.0-pip-multinode-hf-4.44.0-genai` | [v2.4.0] | [v2.4.0+cpu] | [v2.4.0][ccl-v2.4.0] | [v4.44.0] | [v0.4.0-Beta] |

Below is an example that shows a single-node job with the existing [`finetune.py`](../workflows/charts/huggingface-llm/scripts/finetune.py) script.

@@ -251,7 +254,7 @@ Below is an example that shows single node job with the existing [`finetune.py`]
docker run -it \
-v $PWD/workflows/charts/huggingface-llm/scripts:/workspace/scripts \
-w /workspace/scripts \
- intel/intel-extension-for-pytorch:2.3.0-pip-multinode-hf-4.41.2-genai \
+ intel/intel-extension-for-pytorch:2.4.0-pip-multinode-hf-4.44.0-genai \
bash -c 'python finetune.py <script-args>'
```

@@ -261,6 +264,7 @@ The images below are [TorchServe*] with CPU Optimizations:

| Tag(s) | Pytorch | IPEX | Dockerfile |
| ------------------- | -------- | ------------ | --------------- |
+ | `2.4.0-serving-cpu` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
| `2.3.0-serving-cpu` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
| `2.2.0-serving-cpu` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
@@ -272,6 +276,7 @@ The images below are built only with CPU optimizations (GPU acceleration support

| Tag(s) | Pytorch | IPEX | Dockerfile |
| ---------------- | -------- | ------------ | --------------- |
+ | `2.4.0-idp-base` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
| `2.3.0-idp-base` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
| `2.2.0-idp-base` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
| `2.1.0-idp-base` | [v2.1.0] | [v2.1.0+cpu] | [v0.2.3] |
@@ -281,6 +286,7 @@ The images below additionally include [Jupyter Notebook](https://jupyter.org/) s

| Tag(s) | Pytorch | IPEX | Dockerfile |
| ------------------- | -------- | ------------ | --------------- |
+ | `2.4.0-idp-jupyter` | [v2.4.0] | [v2.4.0+cpu] | [v0.4.0-Beta] |
| `2.3.0-idp-jupyter` | [v2.3.0] | [v2.3.0+cpu] | [v0.4.0-Beta] |
| `2.2.0-idp-jupyter` | [v2.2.0] | [v2.2.0+cpu] | [v0.3.4] |
| `2.1.0-idp-jupyter` | [v2.1.0] | [v2.1.0+cpu] | [v0.2.3] |
@@ -290,6 +296,7 @@ The images below additionally include [Intel® oneAPI Collective Communications

| Tag(s) | Pytorch | IPEX | oneCCL | INC | Dockerfile |
| --------------------- | -------- | ------------ | -------------------- | --------- | --------------- |
+ | `2.4.0-idp-multinode` | [v2.4.0] | [v2.4.0+cpu] | [v2.4.0][ccl-v2.4.0] | [v3.0] | [v0.4.0-Beta] |
| `2.3.0-idp-multinode` | [v2.3.0] | [v2.3.0+cpu] | [v2.3.0][ccl-v2.3.0] | [v2.6] | [v0.4.0-Beta] |
| `2.2.0-idp-multinode` | [v2.2.0] | [v2.2.0+cpu] | [v2.2.0][ccl-v2.2.0] | [v2.4.1] | [v0.3.4] |
| `2.1.0-idp-mulitnode` | [v2.1.0] | [v2.1.0+cpu] | [v2.1.0][ccl-v2.1.0] | [v2.3.1] | [v0.2.3] |
@@ -380,6 +387,7 @@ It is the image user's responsibility to ensure that any use of The images below
[v2.1.10+xpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.1.10%2Bxpu
[v2.0.110+xpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.0.110%2Bxpu

+ [v2.4.0]: https://github.com/pytorch/pytorch/releases/tag/v2.4.0
[v2.3.0]: https://github.com/pytorch/pytorch/releases/tag/v2.3.0
[v2.2.2]: https://github.com/pytorch/pytorch/releases/tag/v2.2.2
[v2.2.0]: https://github.com/pytorch/pytorch/releases/tag/v2.2.0
@@ -388,25 +396,28 @@ It is the image user's responsibility to ensure that any use of The images below
[v2.0.1]: https://github.com/pytorch/pytorch/releases/tag/v2.0.1
[v2.0.0]: https://github.com/pytorch/pytorch/releases/tag/v2.0.0

+ [v3.0]: https://github.com/intel/neural-compressor/releases/tag/v3.0
[v2.6]: https://github.com/intel/neural-compressor/releases/tag/v2.6
[v2.4.1]: https://github.com/intel/neural-compressor/releases/tag/v2.4.1
[v2.3.1]: https://github.com/intel/neural-compressor/releases/tag/v2.3.1
[v2.1.1]: https://github.com/intel/neural-compressor/releases/tag/v2.1.1

+ [v2.4.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.4.0%2Bcpu
[v2.3.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.3.0%2Bcpu
[v2.2.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.2.0%2Bcpu
[v2.1.100+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.1.0%2Bcpu
[v2.1.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.1.0%2Bcpu
[v2.0.100+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.0.0%2Bcpu
[v2.0.0+cpu]: https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.0.0%2Bcpu

+ [ccl-v2.4.0]: https://github.com/intel/torch-ccl/releases/tag/v2.4.0%2Bcpu%2Brc0
[ccl-v2.3.0]: https://github.com/intel/torch-ccl/releases/tag/v2.3.0%2Bcpu
[ccl-v2.2.0]: https://github.com/intel/torch-ccl/releases/tag/v2.2.0%2Bcpu
[ccl-v2.1.0]: https://github.com/intel/torch-ccl/releases/tag/v2.1.0%2Bcpu
[ccl-v2.0.0]: https://github.com/intel/torch-ccl/releases/tag/v2.1.0%2Bcpu

<!-- HuggingFace transformers releases -->
- [v4.41.2]: https://github.com/huggingface/transformers/releases/tag/v4.41.2
+ [v4.44.0]: https://github.com/huggingface/transformers/releases/tag/v4.44.0

[803]: https://dgpu-docs.intel.com/releases/LTS_803.29_20240131.html
[736]: https://dgpu-docs.intel.com/releases/stable_736_25_20231031.html