
Commit e1f179e

Rebase 0.4.38 (#569)
1 parent bb19e63


44 files changed (+377383 / -9396 lines)

.github/CODEOWNERS

Lines changed: 1 addition & 1 deletion
@@ -1,2 +1,2 @@
-# * @agramesh1 @vsanghavi @mdfaijul @akhilgoe @ShengYang1 @bhavani-subramanian @ashraf-bhuiyan
+* @agramesh1 @vsanghavi @mdfaijul @akhilgoe @Solaryee @bhavani-subramanian @ashraf-bhuiyan

CONTRIBUTING.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 ### License

-<PROJECT NAME> is licensed under the terms in [LICENSE]<link to license file in repo>. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
+Intel® Extension for OpenXLA is licensed under the terms in [LICENSE](LICENSE.txt). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

 ### Sign your work

README.md

Lines changed: 20 additions & 30 deletions
@@ -7,7 +7,7 @@
 The [OpenXLA](https://github.com/openxla/xla) Project brings together a community of developers and leading AI/ML teams to accelerate ML and address infrastructure fragmentation across ML frameworks and hardware.

-Intel® Extension for OpenXLA includes PJRT plugin implementation, which seamlessly runs JAX models on Intel GPU. The PJRT API simplified the integration, which allowed the Intel GPU plugin to be developed separately and quickly integrated into JAX. This same PJRT implementation also enables initial Intel GPU support for TensorFlow and PyTorch models with XLA acceleration. Refer to [OpenXLA PJRT Plugin RFC](https://github.com/openxla/community/blob/main/rfcs/20230123-pjrt-plugin.md) for more details.
+Intel® Extension for OpenXLA includes PJRT plugin implementation, which seamlessly runs JAX models on Intel GPU. The PJRT API simplifies the integration, enabling the Intel GPU plugin to be developed separately and allowing smooth integration with JAX. This same PJRT implementation also enables initial Intel GPU support for TensorFlow and PyTorch models with XLA acceleration. Refer to [OpenXLA PJRT Plugin RFC](https://github.com/openxla/community/blob/main/rfcs/20230123-pjrt-plugin.md) for more details.

 This guide introduces the overview of OpenXLA high level integration structure and demonstrates how to build Intel® Extension for OpenXLA and run JAX example with OpenXLA on Intel GPU. JAX is the first supported front-end.

@@ -26,59 +26,48 @@ This guide introduces the overview of OpenXLA high level integration structure a
 Verified Hardware Platforms:

-* Intel® Data Center GPU Max Series, Driver Version: [LTS release 2350.125](https://dgpu-docs.intel.com/releases/LTS-release-notes.html)
+* Intel® Data Center GPU Max Series, Driver Version: [LTS release 2350.136](https://dgpu-docs.intel.com/releases/LTS-release-notes.html)
-* Intel® Data Center GPU Flex Series, Driver Version: [LTS release 2350.125](https://dgpu-docs.intel.com/driver/installation.html)
+* Intel® Data Center GPU Flex Series, Driver Version: [LTS release 2350.136](https://dgpu-docs.intel.com/driver/installation.html)

 ### Software Requirements

 * Ubuntu 22.04 (64-bit)
   * Intel® Data Center GPU Flex Series
 * Ubuntu 22.04, SUSE Linux Enterprise Server(SLES) 15 SP4
   * Intel® Data Center GPU Max Series
-* [Intel® oneAPI Base Toolkit 2025.0](https://www.intel.com/content/www/us/en/developer/articles/release-notes/intel-oneapi-toolkit-release-notes.html)
-* Jax/Jaxlib 0.4.30
-* Python 3.9-3.12
+* [Intel® Deep Learning Essentials 2025.1](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?packages=dl-essentials&dl-lin=offline&dl-essentials-os=linux)
+* Jax/Jaxlib 0.4.38
+* Python 3.9-3.13
 * pip 19.0 or later (requires manylinux2014 support)

-**NOTE: Since JAX has its own [platform limitation](https://jax.readthedocs.io/en/latest/installation.html#supported-platforms) (Ubuntu 20.04 or later), real software requirements is restricted when works with JAX.**
+**NOTE: Since JAX has its own [platform limitation](https://jax.readthedocs.io/en/latest/installation.html#supported-platforms) (Ubuntu 20.04 or later), real software requirements are restricted when working with JAX.**

 ### Install Intel GPU Drivers

 |OS|Intel GPU|Install Intel GPU Driver|
 |-|-|-|
-|Ubuntu 22.04 |Intel® Data Center GPU Flex Series| Refer to the [Installation Guides](https://dgpu-docs.intel.com/driver/installation.html) for latest driver installation. If install the verified Intel® Data Center GPU Max Series/Intel® Data Center GPU Flex Series [LTS release 2350.125](https://dgpu-docs.intel.com/driver/installation.html), please append the specific version after components, such as `sudo apt-get install intel-opencl-icd==24.45.31740.10-1057~22.04`|
-|Ubuntu 22.04, SLES 15 SP4|Intel® Data Center GPU Max Series| Refer to the [Installation Guides](https://dgpu-docs.intel.com/driver/installation.html) for latest driver installation. If install the verified Intel® Data Center GPU Max Series/Intel® Data Center GPU Flex Series [LTS release 2350.125](https://dgpu-docs.intel.com/releases/LTS-release-notes.html), please append the specific version after components, such as `sudo apt-get install intel-opencl-icd==24.45.31740.10-1057~22.04`|
+|Ubuntu 22.04 |Intel® Data Center GPU Flex Series| Refer to the [Installation Guides](https://dgpu-docs.intel.com/driver/installation.html) for latest driver installation. If installing the verified Intel® Data Center GPU Max Series/Intel® Data Center GPU Flex Series [LTS release 2350.136](https://dgpu-docs.intel.com/driver/installation.html), please append the specific version after components, such as `sudo apt-get install intel-opencl-icd==24.45.31740.10-1057~22.04`|
+|Ubuntu 22.04, SLES 15 SP4|Intel® Data Center GPU Max Series| Refer to the [Installation Guides](https://dgpu-docs.intel.com/driver/installation.html) for latest driver installation. If installing the verified Intel® Data Center GPU Max Series/Intel® Data Center GPU Flex Series [LTS release 2350.136](https://dgpu-docs.intel.com/releases/LTS-release-notes.html), please append the specific version after components, such as `sudo apt-get install intel-opencl-icd==24.45.31740.10-1057~22.04`|

-### Install oneAPI Base Toolkit Packages
+### Install Intel® Deep Learning Essentials Packages

-Need to install components of Intel® oneAPI Base Toolkit:
+Need to install components of Intel® Deep Learning Essentials:

 * Intel® oneAPI DPC++ Compiler
 * Intel® oneAPI Math Kernel Library (oneMKL)

 ```bash
-$ wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/96aa5993-5b22-4a9b-91ab-da679f422594/intel-oneapi-base-toolkit-2025.0.0.885_offline.sh
+$ wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/e7705a6d-954d-465c-a5bc-4f820e2e4e90/intel-deep-learning-essentials-2025.0.2.9_offline.sh
 # 2 components are necessary: DPC++/C++ Compiler and oneMKL
-sudo sh intel-oneapi-base-toolkit-2025.0.0.885_offline.sh
+sudo sh ./intel-deep-learning-essentials-2025.0.2.9_offline.sh -a --silent --eula accept

 # Source OneAPI env
-source /opt/intel/oneapi/compiler/2025.0/env/vars.sh
-source /opt/intel/oneapi/mkl/2025.0/env/vars.sh
+source /opt/intel/oneapi/compiler/2025.1/env/vars.sh
+source /opt/intel/oneapi/mkl/2025.1/env/vars.sh
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/intel/oneapi/umf/latest/lib
 ```

-**Backup**: Recommend to rollback to **Toolkit 2024.1** if meet performance issue. See [Release Notes](https://github.com/intel/intel-extension-for-openxla/releases) for more details.
-```bash
-wget https://registrationcenter-download.intel.com/akdlm/IRC_NAS/fdc7a2bc-b7a8-47eb-8876-de6201297144/l_BaseKit_p_2024.1.0.596.sh
-# 2 components are necessary: DPC++/C++ Compiler and oneMKL
-sudo sh l_BaseKit_p_2024.1.0.596.sh
-
-# Source OneAPI env
-source /opt/intel/oneapi/compiler/2024.1/env/vars.sh
-source /opt/intel/oneapi/mkl/2024.1/env/vars.sh
-```
-
 ### Install Jax and Jaxlib

 ```bash
@@ -89,6 +78,7 @@ Please refer to [test/requirements.txt](test/requirements.txt) for the version d
 The following table tracks intel-extension-for-openxla versions and compatible versions of `jax` and `jaxlib`. The compatibility between `jax` and `jaxlib` is maintained through JAX. This version restriction will be relaxed over time as the plugin API matures.
 |**intel-extension-for-openxla**|**jaxlib**|**jax**|
 |:-:|:-:|:-:|
+| 0.6.0 | 0.4.38 | 0.4.38 |
 | 0.5.0 | 0.4.30 | >= 0.4.30, <= 0.4.31|
 | 0.4.0 | 0.4.26 | >= 0.4.26, <= 0.4.27|
 | 0.3.0 | 0.4.24 | >= 0.4.24, <= 0.4.27|
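The version constraints in this table can be checked mechanically before installing. A minimal sketch (the dictionary and function names are illustrative, not an API of the package):

```python
# Encodes the compatibility table above: plugin version -> (min jax, max jax),
# bounds inclusive. Names here are illustrative, not part of the package.
COMPAT = {
    "0.6.0": ("0.4.38", "0.4.38"),
    "0.5.0": ("0.4.30", "0.4.31"),
    "0.4.0": ("0.4.26", "0.4.27"),
    "0.3.0": ("0.4.24", "0.4.27"),
}

def _key(version):
    # Compare dotted versions numerically, so that "0.4.30" > "0.4.9".
    return tuple(int(part) for part in version.split("."))

def jax_is_compatible(plugin_version, jax_version):
    lo, hi = COMPAT[plugin_version]
    return _key(lo) <= _key(jax_version) <= _key(hi)
```

For example, `jax_is_compatible("0.5.0", "0.4.31")` holds, while `jax_is_compatible("0.6.0", "0.4.30")` does not, matching the table.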
@@ -112,7 +102,7 @@ git clone https://github.com/intel/intel-extension-for-openxla.git
 ./configure # Choose Yes for all.
 bazel build //xla/tools/pip_package:build_pip_package
 ./bazel-bin/xla/tools/pip_package/build_pip_package ./
-pip install intel_extension_for_openxla-0.5.0-cp39-cp39-linux_x86_64.whl
+pip install intel_extension_for_openxla-0.6.0-cp312-cp312-linux_x86_64.whl
 ```
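The wheel name carries the CPython ABI tag of the interpreter used at build time (here `cp312`), and `pip` will refuse a mismatched wheel. A small sketch of checking the tag up front (the helper is hypothetical, not part of the build scripts):

```python
# Check that a wheel's CPython tag (e.g. "cp312") matches the running
# interpreter before attempting `pip install`. Helper name is illustrative.
import sys

def wheel_matches_interpreter(wheel_filename):
    tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return f"-{tag}-" in wheel_filename
```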

 **Additional Build Option**:
@@ -128,7 +118,7 @@ bazel build --override_repository=xla=/path/to/xla //xla/tools/pip_package:build
 By default, bazel will automatically search for the required libraries on your system. This eliminates the need for manual configuration in most cases. For more advanced use cases, you can specify a custom location for the libraries using environment variables:

 ```bash
-export MKL_INSTALL_PATH=/opt/intel/oneapi/mkl/2025.0
+export MKL_INSTALL_PATH=/opt/intel/oneapi/mkl/2025.1
 export L0_INSTALL_PATH=/usr
 bazel build //xla/tools/pip_package:build_pip_package
 ```
@@ -162,7 +152,7 @@ print(lax_conv())
 ### Reference result

 ```bash
-jax.local_devices(): [xpu(id=0), xpu(id=1)]
+jax.local_devices(): [sycl(id=0), sycl(id=1)]
 [[[[2.0449753 2.093208 2.1844783 1.9769732 1.5857391 1.6942389]
    [1.9218378 2.2862523 2.1549542 1.8367321 1.3978379 1.3860377]
    [1.9456574 2.062028 2.0365305 1.901286 1.5255247 1.1421617]
@@ -189,7 +179,7 @@ jax.local_devices(): [xpu(id=0), xpu(id=1)]
 2. If there is an error 'version GLIBCXX_3.4.30' not found, upgrade libstdc++ to the latest, for example for conda

 ```bash
-conda install libstdcxx-ng==12.2.0 -c conda-forge
+conda install libstdcxx-ng -c conda-forge -y
 ```

 3. If there is an error '/usr/bin/ld: cannot find -lstdc++: No such file or directory' during source build under Ubuntu 22.04, check the selected GCC-toolchain path and the installed libstdc++.so library path, then create symbolic link of the selected GCC-toolchain path to the libstdc++.so path, for example:

WORKSPACE

Lines changed: 52 additions & 4 deletions
@@ -3,7 +3,7 @@ workspace(name = "intel_extension_for_openxla")
 load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
 load("//third_party:version_check.bzl", "check_bazel_version_at_least")

-check_bazel_version_at_least("5.3.0")
+check_bazel_version_at_least("6.5.0")

 # To update XLA to a new revision,
 # a) update URL and strip_prefix to the new git commit hash
@@ -14,10 +14,10 @@ http_archive(
     name = "xla",
     patch_args = ["-p1"],
     patches = ["//third_party:openxla.patch"],
-    sha256 = "083c7281a629647ab2cc32f054afec74893c33e75328783b8085c818f48235ff",
-    strip_prefix = "xla-79fd5733f99b3c0948d7202bc1bbe1ee3980da5c",
+    sha256 = "0870fcd86678cae31c56cfc57018f52ceec8e4691472af62c847ade746a0eb13",
+    strip_prefix = "xla-20a482597b7dd3067b26ca382b88084ee5a21cf7",
     urls = [
-        "https://github.com/openxla/xla/archive/79fd5733f99b3c0948d7202bc1bbe1ee3980da5c.tar.gz",
+        "https://github.com/openxla/xla/archive/20a482597b7dd3067b26ca382b88084ee5a21cf7.tar.gz",
     ],
 )
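The `# To update XLA` comment above describes the pin-bump procedure: change the commit hash in `urls` and `strip_prefix`, then recompute `sha256` over the downloaded tarball. A sketch of the bookkeeping (helper names are ours; fetching the tarball needs network access and is left to the caller):

```python
# Helpers for bumping the pinned XLA revision in WORKSPACE. The URL and
# strip_prefix are derived from the commit hash; sha256 is the hex digest of
# the tarball bytes, as bazel's http_archive expects. Names are illustrative.
import hashlib

def xla_archive_url(commit):
    return f"https://github.com/openxla/xla/archive/{commit}.tar.gz"

def xla_strip_prefix(commit):
    return f"xla-{commit}"

def sha256_hex(tarball_bytes):
    return hashlib.sha256(tarball_bytes).hexdigest()
```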

@@ -47,6 +47,7 @@ python_init_repositories(
         "3.10": "@xla//:requirements_lock_3_10.txt",
         "3.11": "@xla//:requirements_lock_3_11.txt",
         "3.12": "@xla//:requirements_lock_3_12.txt",
+        "3.13": "@xla//:requirements_lock_3_13.txt",
     },
 )

@@ -82,6 +83,53 @@
 load("@xla//:workspace0.bzl", "xla_workspace0")

 xla_workspace0()

+load(
+    "@xla//third_party/tsl/third_party/gpus/cuda/hermetic:cuda_json_init_repository.bzl",
+    "cuda_json_init_repository",
+)
+
+cuda_json_init_repository()
+
+load(
+    "@cuda_redist_json//:distributions.bzl",
+    "CUDA_REDISTRIBUTIONS",
+    "CUDNN_REDISTRIBUTIONS",
+)
+load(
+    "@xla//third_party/tsl/third_party/gpus/cuda/hermetic:cuda_redist_init_repositories.bzl",
+    "cuda_redist_init_repositories",
+    "cudnn_redist_init_repository",
+)
+
+cuda_redist_init_repositories(
+    cuda_redistributions = CUDA_REDISTRIBUTIONS,
+)
+
+cudnn_redist_init_repository(
+    cudnn_redistributions = CUDNN_REDISTRIBUTIONS,
+)
+
+load(
+    "@xla//third_party/tsl/third_party/gpus/cuda/hermetic:cuda_configure.bzl",
+    "cuda_configure",
+)
+
+cuda_configure(name = "local_config_cuda")
+
+load(
+    "@xla//third_party/tsl/third_party/nccl/hermetic:nccl_redist_init_repository.bzl",
+    "nccl_redist_init_repository",
+)
+
+nccl_redist_init_repository()
+
+load(
+    "@xla//third_party/tsl/third_party/nccl/hermetic:nccl_configure.bzl",
+    "nccl_configure",
+)
+
+nccl_configure(name = "local_config_nccl")
+
 load(
     "@bazel_toolchains//repositories:repositories.bzl",
     bazel_toolchains_repositories = "repositories",

docs/acc_jax.md

Lines changed: 7 additions & 6 deletions
@@ -1,15 +1,16 @@
 # Accelerated JAX on Intel GPU

 ## Intel® Extension for OpenXLA* plug-in
-Intel® Extension for OpenXLA* includes PJRT plugin implementation, which seamlessly runs JAX models on Intel GPU. The PJRT API simplified the integration, which allowed the Intel GPU plugin to be developed separately and quickly integrated into JAX. Refer to [OpenXLA PJRT Plugin RFC](https://github.com/openxla/community/blob/main/rfcs/20230123-pjrt-plugin.md) for more details.
+Intel® Extension for OpenXLA includes PJRT plugin implementation, which seamlessly runs JAX models on Intel GPU. The PJRT API simplifies the integration, enabling the Intel GPU plugin to be developed separately and allowing smooth integration with JAX. This same PJRT implementation also enables initial Intel GPU support for TensorFlow and PyTorch models with XLA acceleration. Refer to [OpenXLA PJRT Plugin RFC](https://github.com/openxla/community/blob/main/rfcs/20230123-pjrt-plugin.md) for more details.

 ## Requirements
-Please check [README#requirements](../README.md#2-requirements) for the requirements of hardware and software.
+Please check the [Requirements section](../README.md#2-requirements) for the hardware and software requirements.

 ## Install
 The following table tracks intel-extension-for-openxla versions and compatible versions of `jax` and `jaxlib`. The compatibility between `jax` and `jaxlib` is maintained through JAX. This version restriction will be relaxed over time as the plugin API matures.
 |**intel-extension-for-openxla**|**jaxlib**|**jax**|
 |:-:|:-:|:-:|
+| 0.6.0 | 0.4.38 | 0.4.38 |
 | 0.5.0 | 0.4.30 | >= 0.4.30, <= 0.4.31|
 | 0.4.0 | 0.4.26 | >= 0.4.26, <= 0.4.27|
 | 0.3.0 | 0.4.24 | >= 0.4.24, <= 0.4.27|
@@ -35,7 +36,7 @@ python -c "import jax; print(jax.devices())"
 ```
 Reference result:
 ```
-[xpu(id=0), xpu(id=1)]
+[sycl(id=0), sycl(id=1)]
 ```

 ## Example - Run Stable Diffusion Inference
@@ -53,8 +54,8 @@ pip install transformers==4.47 diffusers==0.31.0 datasets==4.9.7 msgpack==1.1.0
 ```
 Source OneAPI env
 ```
-source /opt/intel/oneapi/compiler/2025.0/env/vars.sh
-source /opt/intel/oneapi/mkl/2025.0/env/vars.sh
+source /opt/intel/oneapi/compiler/2025.1/env/vars.sh
+source /opt/intel/oneapi/mkl/2025.1/env/vars.sh
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/intel/oneapi/umf/latest/lib
 ```
 **NOTE**: The path of OneAPI env script is based on the OneAPI installed path.
@@ -89,5 +90,5 @@ To submit questions, feature requests, and bug reports about the intel-extension
 2. If there is an error 'version GLIBCXX_3.4.30' not found, upgrade libstdc++ to the latest, for example for conda

 ```bash
-conda install libstdcxx-ng==12.2.0 -c conda-forge
+conda install libstdcxx-ng -c conda-forge -y
 ```
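One way to diagnose this GLIBCXX error is to list which GLIBCXX symbol versions a given libstdc++ actually exports; a sketch (the helper is illustrative, and the dump would come from e.g. `strings .../libstdc++.so.6`):

```python
# Extract the GLIBCXX_X.Y[.Z] tags present in a libstdc++ symbol dump, e.g.
# the output of `strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6`.
# If "3.4.30" is absent from the result, upgrading libstdcxx-ng is the fix.
import re

def glibcxx_versions(symbol_dump):
    return sorted(set(re.findall(r"GLIBCXX_(\d+\.\d+(?:\.\d+)?)", symbol_dump)))
```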

example/bert/requirements.txt

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
 datasets==2.20.0
 optax>=0.0.8
-transformers==4.48
+transformers>=4.48.0
 evaluate>=0.4.1

example/gemma/README.md

Lines changed: 2 additions & 2 deletions
@@ -93,7 +93,7 @@ pip install -U tensorflow-text
 ### Examples

-#### Parameters of Single GPU finetune.
+#### Parameters for Single GPU finetune.
 ```
 python finetune.py \
 --model gemma_7b \
@@ -109,7 +109,7 @@ python finetune.py \
 --lora_rank 4
 ```

-#### Parameters of Multiple GPUs data parallel distributed finetune
+#### Parameters for Multiple GPUs data parallel distributed finetune

 ```
 python finetune.py \

example/gptj/README.md

Lines changed: 1 addition & 1 deletion
@@ -7,7 +7,7 @@ Script jax_gptj.py for [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B)
 Mark `intel-extension-for-openxla` folder as \<WORKSPACE\>, then
 ```bash
 cd <WORKSPACE>/example/gptj/
-pip install transformers==4.38 datasets==2.20.0
+pip install transformers==4.48 datasets==2.20.0
 pip install -r ../../test/requirements.txt
 ```

example/stable_diffusion/README.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ please got the [main page](https://github.com/intel/intel-extension-for-openxla/
 Mark `intel-extension-for-openxla` folder as \<WORKSPACE\>, then
 ```bash
 cd <WORKSPACE>/example/stable_diffusion/
-pip install transformers==4.38 diffusers==0.26.3 datasets==2.20.0 msgpack==1.0.7
+pip install transformers==4.48 diffusers==0.26.3 datasets==2.20.0 msgpack==1.0.7
 pip install -r ../../test/requirements.txt
 ```

example/t5/install_xpu.sh

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ pip uninstall tensorflow-metadata numba cudf -y
 pip uninstall tensorflow -y
 pip install tensorflow==2.18.0

-conda install libstdcxx-ng==12.2.0 -c conda-forge -y
+conda install libstdcxx-ng -c conda-forge -y

 pip uninstall mdit-py-plugins jupytext -y
 pip install t5
