Commit 721b6c4

[docs] Update Native Kimi-K2-Thinking documentation and kt-kernel parameters (#1671)
1 parent 47da806 commit 721b6c4

6 files changed (+292, -238 lines)

README.md

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@ KTransformers is a research project focused on efficient inference and fine-tuni

## 🔥 Updates

+ * **Dec 5, 2025**: Support Native Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking-Native.md))
* **Nov 6, 2025**: Support Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking.md)) and fine-tune ([Tutorial](./doc/en/SFT_Installation_Guide_KimiK2.md))
* **Nov 4, 2025**: KTransformers Fine-Tuning × LLaMA-Factory Integration. ([Tutorial](./doc/en/KTransformers-Fine-Tuning_User-Guide.md))
* **Oct 27, 2025**: Support Ascend NPU. ([Tutorial](./doc/zh/DeepseekR1_V3_tutorial_zh_for_Ascend_NPU.md))

README_ZH.md

Lines changed: 1 addition & 0 deletions
@@ -17,6 +17,7 @@ KTransformers is a project focused on enabling large language model

## 🔥 Updates

+ * **Dec 5, 2025**: Support native Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking-Native.md))
* **Nov 6, 2025**: Support Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking.md)) and fine-tuning ([Tutorial](./doc/en/SFT_Installation_Guide_KimiK2.md))
* **Nov 4, 2025**: KTransformers Fine-Tuning × LLaMA-Factory integration ([Tutorial](./doc/en/KTransformers-Fine-Tuning_User-Guide.md))
* **Oct 27, 2025**: Support Ascend NPU ([Tutorial](./doc/zh/DeepseekR1_V3_tutorial_zh_for_Ascend_NPU.md))

doc/en/Kimi-K2-Thinking-Native.md

Lines changed: 216 additions & 1 deletion
@@ -1 +1,216 @@
- First write how to install and run it, then give performance numbers, then link to the doc on how to connect via Claude Code.
# Running Kimi-K2-Thinking with SGLang and KT-Kernel

This tutorial demonstrates how to run Kimi-K2-Thinking inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to the CPU.

## Table of Contents

- [Hardware Requirements](#hardware-requirements)
- [Prerequisites](#prerequisites)
- [Step 1: Download Model Weights](#step-1-download-model-weights)
- [Step 2: Launch SGLang Server](#step-2-launch-sglang-server)
- [Step 3: Send Inference Requests](#step-3-send-inference-requests)

## Hardware Requirements

**Minimum Configuration:**

- **GPU**: NVIDIA RTX 4090 48GB (or equivalent with at least 48GB of VRAM available)
- **RAM**: At least 650GB of system memory
- **Storage**: ~600GB for model weights (native INT4 weights; the same weight directory is used for CPU and GPU)

**Tested Configuration:**

- **GPU**: 1/2/4/8x NVIDIA RTX 4090/L20 48GB
- **CPU**: 2x Intel(R) Xeon(R) Platinum 8488C
- **RAM**: 2TB DDR5 4800MHz
- **OS**: Linux (Ubuntu 20.04+ recommended)

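Before going further, it can save time to confirm the machine actually meets these numbers. Below is a minimal sketch using standard Linux tools (it assumes the NVIDIA driver is installed; `/path/to/models` is whatever storage path you plan to use in Step 1):

```bash
# GPU model and free VRAM (expect roughly 48GB per card)
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv

# Total and available system memory (expect 650GB+ available)
free -h

# Free disk space where the weights will live (expect ~600GB free)
df -h /path/to/models
```
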
## Prerequisites

Before starting, ensure you have:

1. **KT-Kernel installed** - Follow the [installation guide](./kt-kernel_intro.md#installation)
2. **SGLang installed** - Follow the [SGLang integration steps](./kt-kernel_intro.md#integration-with-sglang)

   Note: Currently, please clone our custom SGLang repository and install it from the `kimi_k2` branch:

   ```bash
   git clone https://github.com/kvcache-ai/sglang.git
   cd sglang
   git checkout kimi_k2
   pip install -e "python[all]"
   ```

3. **CUDA toolkit** - Compatible with your GPU (CUDA 11.8+ recommended)
4. **Hugging Face CLI** - For downloading models:

   ```bash
   pip install huggingface-hub
   ```

## Step 1: Download Model Weights

```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models

# Download Kimi-K2-Thinking (INT4 weights used by both CPU and GPU)
huggingface-cli download moonshotai/Kimi-K2-Thinking \
    --local-dir /path/to/kimi-k2-thinking
```

**Note:** Replace `/path/to/models` with your actual storage path throughout this tutorial.

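A download of this size is occasionally interrupted, so it is worth confirming the weights are complete before launching the server. A small sketch; re-running `huggingface-cli download` with the same arguments should resume and skip files that are already present:

```bash
# Resume/verify the download; already-complete files are skipped
huggingface-cli download moonshotai/Kimi-K2-Thinking \
    --local-dir /path/to/kimi-k2-thinking

# The directory should contain config.json, the tokenizer files, and the
# safetensors shards, totalling roughly 600GB
ls /path/to/kimi-k2-thinking
du -sh /path/to/kimi-k2-thinking
```
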
## Step 2: Launch SGLang Server

Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

### Launch Command (2x RTX 4090 Example)

```bash
python -m sglang.launch_server \
    --host 0.0.0.0 \
    --port 30001 \
    --model /path/to/kimi-k2-thinking \
    --kt-weight-path /path/to/kimi-k2-thinking \
    --kt-cpuinfer 96 \
    --kt-threadpool-count 2 \
    --kt-num-gpu-experts 8 \
    --kt-method RAWINT4 \
    --kt-gpu-prefill-token-threshold 400 \
    --kt-max-deferred-experts-per-token 1 \
    --trust-remote-code \
    --mem-fraction-static 0.94 \
    --served-model-name Kimi-K2-Thinking \
    --enable-mixed-chunk \
    --tensor-parallel-size 2 \
    --enable-p2p-check \
    --disable-shared-experts-fusion \
    --chunked-prefill-size 65536 \
    --max-total-tokens 65536 \
    --attention-backend flashinfer
```

It takes about 2-3 minutes for the server to start.

See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.

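Because startup takes a few minutes, it is convenient to poll the server until it responds before sending real traffic. A minimal sketch against the OpenAI-compatible endpoint (it assumes the host and port used above and that `curl` is available):

```bash
# Block until the OpenAI-compatible API answers, then list the served model
until curl -sf http://localhost:30001/v1/models > /dev/null; do
    echo "Waiting for SGLang server..."
    sleep 10
done
curl -s http://localhost:30001/v1/models
```
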
### Key Parameters

| Parameter | Description |
|-----------|-------------|
| `--kt-method RAWINT4` | CPU and GPU share the same native INT4 weights. Set `--model` and `--kt-weight-path` to the same directory. |
| `--kt-num-gpu-experts` | Number of experts kept on the GPU for decoding. |
| `--kt-gpu-prefill-token-threshold` | Token-count threshold that selects the prefill strategy: at or below it, hybrid CPU+GPU prefill; above it, layerwise GPU prefill. |
| `--chunked-prefill-size` | Maximum tokens per prefill batch. |
| `--max-total-tokens` | Maximum total tokens in the KV cache. |

### About `--kt-gpu-prefill-token-threshold`

This parameter controls the prefill strategy:

- **≤ threshold**: Hybrid CPU+GPU prefill. No extra VRAM is needed, but performance degrades gradually as the token count increases.
- **> threshold**: Layerwise GPU prefill. Performance improves substantially with longer inputs (up to `chunked-prefill-size`), but it requires 9GB+ of extra VRAM.

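For example, if the extra VRAM for layerwise prefill is not available on your cards, one option implied by the behavior above (treat it as an assumption to verify on your setup) is to raise the threshold so that every prefill batch stays on the hybrid CPU+GPU path:

```bash
# Keep all prefill on the hybrid CPU+GPU path by making the threshold
# at least as large as the largest possible prefill batch (chunked-prefill-size)
--kt-gpu-prefill-token-threshold 65536
```
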
### Troubleshooting OOM

Layerwise prefill requires extra VRAM (~9GB, plus an incremental cost that grows with prefill length). If you encounter OOM, lower the following parameters to match your use case and hardware (refer to the recommended parameters table below):

| Parameter | VRAM Impact |
|-----------|-------------|
| `--kt-num-gpu-experts` | Lowering it reduces expert-weight VRAM usage |
| `--chunked-prefill-size` | Lowering it reduces the extra VRAM allocated for prefill |
| `--max-total-tokens` | Lowering it reduces KV cache VRAM usage |

**Tip:** Test with an input of length `chunked-prefill-size` to verify that your configuration won't OOM during prefill.

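One way to run that test is to build a dummy prompt of roughly `chunked-prefill-size` tokens and send it once through the API. A rough sketch (the repeated word is only an approximation of the token count; adjust 65536 to your `chunked-prefill-size`):

```bash
# Build a prompt of roughly 65536 tokens and write the request body to a file
# (the body is too large to pass directly on the curl command line)
PROMPT=$(printf 'hello %.0s' $(seq 1 65536))
cat > /tmp/prefill_test.json <<EOF
{
  "model": "Kimi-K2-Thinking",
  "max_tokens": 1,
  "messages": [{"role": "user", "content": "$PROMPT"}]
}
EOF

# If this returns a normal completion and the server stays up,
# the configuration survives a full-length prefill
curl -s http://localhost:30001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d @/tmp/prefill_test.json
```
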
### Recommended Parameters

| GPU Config | `kt-num-gpu-experts` | `max-total-tokens` | `chunked-prefill-size` |
|------------|----------------------|--------------------|------------------------|
| 1x RTX 4090 (48GB) | 1 | 32768 | 32768 |
| 2x RTX 4090 (48GB) | 8 | 65536 | 65536 |
| 4x RTX 4090 (48GB) | 30 | 80000 | 65536 |
| 8x RTX 4090 (48GB) | 80 | 100000 | 65536 |

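As an example, the single-GPU row translates into the launch command from Step 2 with the three values above swapped in and tensor parallelism disabled. This is only a sketch to use as a starting point; flags tied to the CPU rather than the GPU count (such as `--kt-cpuinfer` and `--kt-threadpool-count`) are left unchanged here but may also need tuning for your machine:

```bash
# 1x RTX 4090 (48GB) variant of the Step 2 launch command
python -m sglang.launch_server \
    --host 0.0.0.0 \
    --port 30001 \
    --model /path/to/kimi-k2-thinking \
    --kt-weight-path /path/to/kimi-k2-thinking \
    --kt-cpuinfer 96 \
    --kt-threadpool-count 2 \
    --kt-num-gpu-experts 1 \
    --kt-method RAWINT4 \
    --kt-gpu-prefill-token-threshold 400 \
    --kt-max-deferred-experts-per-token 1 \
    --trust-remote-code \
    --mem-fraction-static 0.94 \
    --served-model-name Kimi-K2-Thinking \
    --enable-mixed-chunk \
    --tensor-parallel-size 1 \
    --enable-p2p-check \
    --disable-shared-experts-fusion \
    --chunked-prefill-size 32768 \
    --max-total-tokens 32768 \
    --attention-backend flashinfer
```
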
### Performance

The following benchmarks were measured with a single concurrent request at the maximum prefill length (32768 tokens):

| GPU Config | Prefill Throughput |
|------------|--------------------|
| 1x RTX 4090 (48GB) | 290 tokens/s |
| 2x RTX 4090 (48GB) | 529 tokens/s |
| 4x RTX 4090 (48GB) | 775 tokens/s |
| 8x RTX 4090 (48GB) | 1060 tokens/s |

## Step 3: Send Inference Requests

Once the server is running, you can send inference requests through the OpenAI-compatible API.

### Basic Chat Completion Request

```bash
curl -s http://localhost:30001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "Kimi-K2-Thinking",
      "stream": false,
      "messages": [
        {"role": "user", "content": "hi"}
      ]
    }'
```

### Example Response

```json
{
  "id": "cd0905562bf44513947284f80cc5634b",
  "object": "chat.completion",
  "created": 1764921457,
  "model": "Kimi-K2-Thinking",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " <think> The user says \"hi\". This is a very simple greeting. I should respond in a friendly and helpful manner. Since I'm an AI assistant, I should be professional but approachable.\n\nPossible responses:\n1. \"Hello! How can I help you today?\"\n2. \"Hi there! What can I do for you?\"\n3. \"Hello! It's nice to hear from you. What would you like to talk about?\"\n4. \"Hi! I'm here to assist you with any questions you might have.\"\n\nI think option 1 is the most standard and professional. It's direct, friendly, and opens the door for the user to ask their question. I should keep it concise.\n\nLet me go with: \"Hello! How can I help you today?\" </think> Hello! How can I help you today?",
        "reasoning_content": null,
        "tool_calls": null
      },
      "logprobs": null,
      "finish_reason": "stop",
      "matched_stop": 163586
    }
  ],
  "usage": {
    "prompt_tokens": 26,
    "total_tokens": 189,
    "completion_tokens": 163,
    "prompt_tokens_details": null,
    "reasoning_tokens": 0
  },
  "metadata": {
    "weight_version": "default"
  }
}
```

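For interactive clients you will usually want streaming instead. The same endpoint supports it via `"stream": true`, with the reply arriving as server-sent `data:` chunks rather than a single JSON object. A minimal variant of the request above:

```bash
# -N disables curl's output buffering so the streamed chunks appear as they arrive
curl -sN http://localhost:30001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "Kimi-K2-Thinking",
      "stream": true,
      "messages": [
        {"role": "user", "content": "hi"}
      ]
    }'
```
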
## Advanced Use Case: Running Claude Code with a Native Kimi-K2-Thinking Local Backend

Add the following parameters to the SGLang launch command above to enable tool calling support:

```bash
--tool-call-parser kimi_k2 --reasoning-parser kimi_k2
```

With these parameters enabled, you can use [claude-code-router](https://github.com/musistudio/claude-code-router) to connect Kimi-K2-Thinking as a local backend for [Claude Code](https://github.com/anthropics/claude-code).

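Before wiring up claude-code-router, you can check that tool calling is parsed correctly by sending a request with a `tools` definition in the standard OpenAI format. This is only a sketch, and the `get_weather` function is purely illustrative; if the parsers are active, the response should populate `tool_calls` instead of leaving it `null`:

```bash
curl -s http://localhost:30001/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
      "model": "Kimi-K2-Thinking",
      "messages": [{"role": "user", "content": "What is the weather in Beijing right now?"}],
      "tools": [{
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
          }
        }
      }]
    }'
```
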
## Additional Resources

- [KT-Kernel Documentation](../../../kt-kernel/README.md)
- [SGLang GitHub](https://github.com/sgl-project/sglang)
- [Claude Code Router](https://github.com/musistudio/claude-code-router) - Route Claude Code to custom backends
