Commit 4d90735

update IPEX version in doc for 2.8 release (#5730)
Signed-off-by: Neo Zhang Jianyu <[email protected]>
1 parent dd525e8 commit 4d90735

File tree (4 files changed: +9 -9 lines)

- docs/tutorials/llm.rst
- docs/tutorials/llm/int4_weight_only_quantization.md
- docs/tutorials/llm/llm_optimize_transformers.md
- examples/gpu/llm/README.md

docs/tutorials/llm.rst (1 addition & 1 deletion)

```diff
@@ -136,7 +136,7 @@ LLM fine-tuning on Intel® Data Center Max 1550 GPU
   - ✅


-Check `LLM best known practice <https://github.com/intel/intel-extension-for-pytorch/tree/release/xpu/2.7.10/examples/gpu/llm>`_ for instructions to install/setup environment and example scripts..
+Check `LLM best known practice <https://github.com/intel/intel-extension-for-pytorch/tree/release/xpu/2.8.10/examples/gpu/llm>`_ for instructions to install/setup environment and example scripts..

 Optimization Methodologies
 --------------------------
```

docs/tutorials/llm/int4_weight_only_quantization.md (3 additions & 3 deletions)

````diff
@@ -129,9 +129,9 @@ Intel® Extension for PyTorch\* implements Weight-Only Quantization for Intel®

 ### Environment Setup

-Please refer to the [env setup](https://github.com/intel/intel-extension-for-pytorch/blob/v2.7.10%2Bxpu/examples/gpu/llm/inference/README.md).
+Please refer to the [env setup](https://github.com/intel/intel-extension-for-pytorch/blob/v2.8.10%2Bxpu/examples/gpu/llm/inference/README.md).

-Example can be found at [Learn WOQ](https://github.com/intel/intel-extension-for-pytorch/tree/v2.7.10%2Bxpu/examples/gpu/llm/inference#learn-to-quantize-llm-and-run-inference).
+Example can be found at [Learn WOQ](https://github.com/intel/intel-extension-for-pytorch/tree/v2.8.10%2Bxpu/examples/gpu/llm/inference#learn-to-quantize-llm-and-run-inference).

 ### Run Weight-Only Quantization LLM on Intel® GPU

@@ -182,7 +182,7 @@ output = loaded_model.generate(inputs)
 ```


-#### Execute [WOQ benchmark script](https://github.com/intel/intel-extension-for-pytorch/blob/v2.7.10%2Bxpu/examples/gpu/llm/inference/run_benchmark_woq.sh)
+#### Execute [WOQ benchmark script](https://github.com/intel/intel-extension-for-pytorch/blob/v2.8.10%2Bxpu/examples/gpu/llm/inference/run_benchmark_woq.sh)

 ```python
 bash run_benchmark_woq.sh
````
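For readers skimming the diff, the second hunk's context line `output = loaded_model.generate(inputs)` returns token ids, not text. A self-contained sketch of that generate-and-decode step follows; the small stand-in model and prompt are illustrative assumptions, since the linked WOQ example reloads a quantized INT4 checkpoint as `loaded_model` instead:

```python
# Sketch of the generate/decode step around `output = loaded_model.generate(inputs)`.
# The model id and prompt are stand-in assumptions; the linked WOQ example
# reloads a quantized INT4 checkpoint here rather than a stock FP32 model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # small stand-in model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_id)
loaded_model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Weight-only quantization is", return_tensors="pt").input_ids
output = loaded_model.generate(inputs, max_new_tokens=32)

# generate() yields token ids; decode them back into a string.
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```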

docs/tutorials/llm/llm_optimize_transformers.md (1 addition & 1 deletion)

```diff
@@ -9,7 +9,7 @@ API documentation is available at [API Docs page](../api_doc.html#ipex.llm.optim

 ## Pseudocode of Common Usage Scenarios

-The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch\* APIs to work with LLMs. Complete examples can be found at [the Example directory](https://github.com/intel/intel-extension-for-pytorch/tree/v2.7.10%2Bxpu/examples/gpu/llm/inference).
+The following sections show pseudocode snippets to invoke Intel® Extension for PyTorch\* APIs to work with LLMs. Complete examples can be found at [the Example directory](https://github.com/intel/intel-extension-for-pytorch/tree/v2.8.10%2Bxpu/examples/gpu/llm/inference).

 ### FP16

```
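The hunk sits directly above this file's `### FP16` section. A minimal sketch of that FP16 flow, built on the documented `ipex.llm.optimize` entry point; the model id, prompt, and generation settings are illustrative assumptions, not taken from the diff:

```python
# Minimal FP16 sketch in the spirit of the doc's pseudocode section.
# Assumes an Intel GPU (XPU) plus an installed intel-extension-for-pytorch;
# the model id and prompt below are illustrative only.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model = model.eval().to("xpu")

# Apply IPEX's LLM-specific optimizations for the target device.
model = ipex.llm.optimize(model, dtype=torch.float16, device="xpu")

inputs = tokenizer("Hello, my name is", return_tensors="pt").input_ids.to("xpu")
with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=32)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```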
examples/gpu/llm/README.md (4 additions & 4 deletions)

````diff
@@ -3,7 +3,7 @@
 Here you can find examples for large language models (LLM) text generation. These scripts:

 > [!NOTE]
-> New Llama models like Llama3.2-1B, Llama3.2-3B and Llama3.3-7B are also supported from release v2.7.10+xpu.
+> New Llama models like Llama3.2-1B, Llama3.2-3B and Llama3.3-7B are also supported from release v2.8.10+xpu.

 - Include both inference/finetuning(lora)/bitsandbytes(qlora-finetuning).
 - Include both single instance and distributed (DeepSpeed) use cases for FP16 optimization.
@@ -18,7 +18,7 @@ Here you can find examples for large language models (LLM) text generation. Thes
 # Get the Intel® Extension for PyTorch* source code
 git clone https://github.com/intel/intel-extension-for-pytorch.git
 cd intel-extension-for-pytorch
-git checkout release/xpu/2.7.10
+git checkout release/xpu/2.8.10
 git submodule sync
 git submodule update --init --recursive

@@ -38,14 +38,14 @@ call .\tools\env_activate.bat [inference|fine-tuning|bitsandbytes]
 ```
 ### Conda-based environment setup with prebuilt release wheel files

-Make sure the driver packages are installed. Refer to [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.7.10%2Bxpu&os=linux%2Fwsl2&package=pip).
+Make sure the driver packages are installed. Refer to [Installation Guide](https://intel.github.io/intel-extension-for-pytorch/#installation?platform=gpu&version=v2.8.10%2Bxpu&os=linux%2Fwsl2&package=pip).

 ```bash

 # Get the Intel® Extension for PyTorch* source code
 git clone https://github.com/intel/intel-extension-for-pytorch.git
 cd intel-extension-for-pytorch
-git checkout release/xpu/2.7.10
+git checkout release/xpu/2.8.10
 git submodule sync
 git submodule update --init --recursive

````
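Every hunk in this commit moves a pin from 2.7.10 to 2.8.10, so after following the updated README a quick sanity check that the local install matches the newly pinned docs can help. A sketch; the version strings in the comments are expectations for this release, not output captured from the diff:

```python
# Sanity check: confirm the installed wheels match the 2.8.10+xpu docs.
import torch
import intel_extension_for_pytorch as ipex

print(torch.__version__)         # expected: a 2.8-series PyTorch build
print(ipex.__version__)          # expected: 2.8.10+xpu for this release
print(torch.xpu.is_available())  # True when an Intel GPU and driver are usable
```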