
Commit 526b711

xin3he and Copilot authored

Update docs/step_by_step_CN.md
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

1 parent 24f5e8f commit 526b711

File tree

1 file changed: +4 −4 lines changed

docs/step_by_step_CN.md

Lines changed: 4 additions & 4 deletions

```diff
@@ -676,10 +676,10 @@ print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50, do_sample=Fal
 #### Detailed support notes for each inference backend
 | Backend | Devices | Bit widths | Data types | Priority | Packing formats | Dependencies |
 |-------------------------|----------------|----------------|--------------|--------|-----------------|--------------------------------|
-| ark | cpu | 2/4/8 | FP32/FP16/BF16 | 6 | gptq/gptq_zp+-1 | auto-round-lib |
-| ark | cpu | 4 | FP32/FP16/BF16 | 6 | awq | auto-round-lib |
-| ark | xpu | 4/8 | FP32/FP16/BF16 | 6 | gptq/gptq_zp+-1 | auto-round-lib |
-| ark | xpu | 4 | FP32/FP16/BF16 | 6 | awq | auto-round-lib |
+| ark | cpu | 2/4/8 | FP32/FP16/BF16 | 6 | gptq/gptq_zp+-1 | auto-round-lib<br/>torch>=2.8.0 |
+| ark | cpu | 4 | FP32/FP16/BF16 | 6 | awq | auto-round-lib<br/>torch>=2.8.0 |
+| ark | xpu | 4/8 | FP32/FP16/BF16 | 6 | gptq/gptq_zp+-1 | auto-round-lib<br/>torch>=2.8.0 |
+| ark | xpu | 4 | FP32/FP16/BF16 | 6 | awq | auto-round-lib<br/>torch>=2.8.0 |
 | ipex (soon to be deprecated) | cpu/xpu | 4 | BF16/FP16 | 5 | gptq_zp+-1/awq | intel-extension-for-pytorch |
 | marlin | cuda | 4/8 | BF16/FP16 | 6 | gptq/gptq_zp+-1 | gptqmodel |
 | exllamav2/<br/>gptqmodel:exllamav2 | cuda | 4 | BF16/FP16 | 5 | gptq | gptqmodel |
```
