Commit c392d59

Doc update for r2.6 (#3453)
* doc update for r2.6
* source build DeepSpeed with the specified commit
* empty DS version and add a note to remind users about the env_setup.sh param change
* deprecate serving examples
* update DeepSpeed UT for the DeepSpeed upgrade

Co-authored-by: Xia, Weiwen <[email protected]>
1 parent d5f20d0 commit c392d59

18 files changed: +116 −75 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ Intel® Extension for PyTorch\*

</div>

-**CPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.5.0%2Bcpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm) <br>
+**CPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=cpu&version=v2.6.0%2Bcpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/release/2.6/examples/cpu/llm) <br>
**GPU** [💻main branch](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🌱Quick Start](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/getting_started.html)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[📖Documentations](https://intel.github.io/intel-extension-for-pytorch/xpu/latest/)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[🏃Installation](https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu)&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp;[💻LLM Example](https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main/examples/gpu/llm)<br>

Intel® Extension for PyTorch\* extends PyTorch\* with up-to-date features optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X<sup>e</sup> Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* xpu device.

dependency_version.json

Lines changed: 2 additions & 2 deletions
@@ -20,8 +20,8 @@
    "version": "2.6.0+cpu"
  },
  "deepspeed": {
-    "version": "0.16.0",
-    "commit": "v0.16.0"
+    "version": "",
+    "commit": "018ece5af2d89a11a4a235f81f94496c78b4f990"
  },
  "oneCCL": {
    "commit": "2021.12"

docker/Dockerfile.prebuilt

Lines changed: 5 additions & 5 deletions
@@ -35,11 +35,11 @@ RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 100

WORKDIR /root

-ARG IPEX_VERSION=2.5.0
-ARG TORCHCCL_VERSION=2.5.0
-ARG PYTORCH_VERSION=2.5.0
-ARG TORCHAUDIO_VERSION=2.5.0
-ARG TORCHVISION_VERSION=0.20.0
+ARG IPEX_VERSION=2.6.0
+ARG TORCHCCL_VERSION=2.6.0
+ARG PYTORCH_VERSION=2.6.0
+ARG TORCHAUDIO_VERSION=2.6.0
+ARG TORCHVISION_VERSION=0.21.0
RUN . ./venv/bin/activate && \
    python -m pip --no-cache-dir install --upgrade \
    pip \
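Because these versions are ordinary `ARG` defaults, they can also be overridden on the `docker build` command line instead of editing the Dockerfile. A hypothetical invocation (the tag and argument values are illustrative, not part of this change):

```console
$ DOCKER_BUILDKIT=1 docker build -f Dockerfile.prebuilt \
    --build-arg IPEX_VERSION=2.6.0 --build-arg PYTORCH_VERSION=2.6.0 \
    -t intel-extension-for-pytorch:2.6.0 .
```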

docker/README.md

Lines changed: 4 additions & 3 deletions
@@ -10,24 +10,25 @@

```console
$ cd $DOCKERFILE_DIR
-$ DOCKER_BUILDKIT=1 docker build -f Dockerfile.prebuilt -t intel-extension-for-pytorch:main .
+$ DOCKER_BUILDKIT=1 docker build -f Dockerfile.prebuilt -t intel-extension-for-pytorch:2.6.0 .
```

Run the following commands to build a `conda` based container with Intel® Extension for PyTorch\* compiled from source:

```console
$ git clone https://github.com/intel/intel-extension-for-pytorch.git
$ cd intel-extension-for-pytorch
+$ git checkout v2.6.0+cpu
$ git submodule sync
$ git submodule update --init --recursive
-$ DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.compile -t intel-extension-for-pytorch:main .
+$ DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile.compile -t intel-extension-for-pytorch:2.6.0 .
```

* Sanity Test

When a docker image is built out, Run the command below to launch into a container:
```console
-$ docker run --rm -it intel-extension-for-pytorch:main bash
+$ docker run --rm -it intel-extension-for-pytorch:2.6.0 bash
```

Then run the command below inside the container to verify correct installation.
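The verification command itself falls outside this hunk. A typical check (an assumption here, not part of the diff) simply imports both packages and prints their versions:

```console
$ python -c "import torch; import intel_extension_for_pytorch as ipex; print(torch.__version__); print(ipex.__version__)"
```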

docs/tutorials/getting_started.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
# Quick Start

-The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=cpu&version=main).
+The following instructions assume you have installed the Intel® Extension for PyTorch\*. For installation instructions, refer to [Installation](../../../index.html#installation?platform=cpu&version=v2.6.0%2Bcpu).

To start using the Intel® Extension for PyTorch\* in your code, you need to make the following changes:

@@ -157,4 +157,4 @@ with torch.inference_mode(), torch.cpu.amp.autocast(enabled=amp_enabled):
    print(gen_text, total_new_tokens, flush=True)
```

-More LLM examples, including usage of low precision data types are available in the [LLM Examples](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm) section.
+More LLM examples, including usage of low precision data types are available in the [LLM Examples](https://github.com/intel/intel-extension-for-pytorch/tree/release/2.6/examples/cpu/llm) section.

docs/tutorials/installation.md

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
Installation
============

-Select your preferences and follow the installation instructions provided on the [Installation page](../../../index.html#installation?platform=cpu&version=v2.5.0%2Bcpu).
+Select your preferences and follow the installation instructions provided on the [Installation page](../../../index.html#installation?platform=cpu&version=v2.6.0%2Bcpu).

After successful installation, refer to the [Quick Start](getting_started.md) and [Examples](examples.md) sections to start using the extension in your code.

-**NOTE:** For detailed instructions on installing and setting up the environment for Large Language Models (LLM), as well as example scripts, refer to the [LLM best practices](https://github.com/intel/intel-extension-for-pytorch/tree/v2.5.0%2Bcpu/examples/cpu/llm).
+**NOTE:** For detailed instructions on installing and setting up the environment for Large Language Models (LLM), as well as example scripts, refer to the [LLM best practices](https://github.com/intel/intel-extension-for-pytorch/tree/v2.6.0%2Bcpu/examples/cpu/llm).

docs/tutorials/introduction.rst

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ the `Large Language Models (LLM) <llm.html>`_ section.

Get Started
-----------
-- `Installation <../../../index.html#installation?platform=cpu&version=v2.5.0%2Bcpu>`_
+- `Installation <../../../index.html#installation?platform=cpu&version=v2.6.0%2Bcpu>`_
- `Quick Start <getting_started.md>`_
- `Examples <examples.md>`_
docs/tutorials/llm.rst

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ Verified for distributed inference mode via DeepSpeed

*Note*: The above verified models (including other models in the same model family, like "codellama/CodeLlama-7b-hf" from LLAMA family) are well supported with all optimizations like indirect access KV cache, fused ROPE, and customized linear kernels. We are working in progress to better support the models in the tables with various data types. In addition, more models will be optimized in the future.

-Please check `LLM best known practice <https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/llm>`_ for instructions to install/setup environment and example scripts.
+Please check `LLM best known practice <https://github.com/intel/intel-extension-for-pytorch/tree/release/2.6/examples/cpu/llm>`_ for instructions to install/setup environment and example scripts.

Module Level Optimization API for customized LLM (Prototype)
------------------------------------------------------------

examples/cpu/inference/cpp/README.md

Lines changed: 5 additions & 5 deletions
@@ -16,15 +16,15 @@ We can have `libtorch` and `libintel-ext-pt` installed via the following command
Download zip file of `libtorch` and decompress it:

```bash
-wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.5.0%2Bcpu.zip
-unzip libtorch-cxx11-abi-shared-with-deps-2.5.0+cpu.zip
+wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.6.0%2Bcpu.zip
+unzip libtorch-cxx11-abi-shared-with-deps-2.6.0+cpu.zip
```

Download and execute `libintel-ext-pt` installation script:

```bash
-wget https://intel-extension-for-pytorch.s3.amazonaws.com/libipex/cpu/libintel-ext-pt-cxx11-abi-2.5.0%2Bcpu.run
-bash libintel-ext-pt-cxx11-abi-2.5.0+cpu.run install ./libtorch
+wget https://intel-extension-for-pytorch.s3.amazonaws.com/libipex/cpu/libintel-ext-pt-cxx11-abi-2.6.0%2Bcpu.run
+bash libintel-ext-pt-cxx11-abi-2.6.0+cpu.run install ./libtorch
```

*Note:* If your C++ project has pre-C\+\+11 library dependencies,

@@ -59,4 +59,4 @@ Run the executable file:
./example-app ../resnet50.pt
```

-Please view the [c++ example in Intel® Extension for PyTorch\* online document](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html#c) for more information.
+Please view the [C++ example in Intel® Extension for PyTorch\* online document](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/examples.html#c) for more information.
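Only the run step appears in the hunk above; producing `example-app` follows the usual libtorch CMake flow. A rough sketch, assuming the example's `CMakeLists.txt` and the `libtorch` directory unpacked earlier (paths are illustrative):

```bash
mkdir build && cd build
# Point CMake at the libtorch tree that libintel-ext-pt was installed into.
cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..
cmake --build . --config Release
./example-app ../resnet50.pt
```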

examples/cpu/llm/Dockerfile

Lines changed: 2 additions & 2 deletions
@@ -43,7 +43,7 @@ COPY . ./intel-extension-for-pytorch
RUN . ~/miniforge3/bin/activate && conda create -y -n compile_py310 python=3.10 && conda activate compile_py310 && \
    cd intel-extension-for-pytorch/examples/cpu/llm && \
    export CC=gcc && export CXX=g++ && \
-    if [ -z ${COMPILE} ]; then bash tools/env_setup.sh 6; else bash tools/env_setup.sh 2; fi && \
+    if [ -z ${COMPILE} ]; then bash tools/env_setup.sh 14; else bash tools/env_setup.sh 10; fi && \
    unset CC && unset CXX

FROM base AS deploy
@@ -60,7 +60,7 @@ COPY --from=dev /root/intel-extension-for-pytorch/tools/get_libstdcpp_lib.sh ./l
RUN . ~/miniforge3/bin/activate && conda create -y -n py310 python=3.10 && conda activate py310 && \
    cd /usr/lib/x86_64-linux-gnu/ && ln -s libtcmalloc.so.4 libtcmalloc.so && cd && \
    cd ./llm && \
-    bash tools/env_setup.sh 1 && \
+    bash tools/env_setup.sh 9 && \
    python -m pip cache purge && \
    mv ./oneCCL_release /opt/oneCCL && \
    chown -R root:root /opt/oneCCL && \
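The numeric mode passed to `tools/env_setup.sh` changed in this release (the commit message calls out the param change), and which mode runs depends on whether the `COMPILE` build arg is set. A hypothetical from-source build of this LLM image (the image tag is illustrative; the meaning of each numeric mode is defined in `env_setup.sh` itself):

```console
$ cd intel-extension-for-pytorch
$ DOCKER_BUILDKIT=1 docker build -f examples/cpu/llm/Dockerfile \
    --build-arg COMPILE=ON -t ipex-llm:2.6.0 .
```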
