Commit 47da806
[doc](kt-kernel): add kimi-k2-thinking (#1670)

# Running Kimi-K2-Thinking with SGLang and KT-Kernel

This tutorial demonstrates how to run Kimi-K2-Thinking inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading expert weights to the CPU.

## Table of Contents

- [Hardware Requirements](#hardware-requirements)
- [Prerequisites](#prerequisites)
- [Step 1: Download Model Weights](#step-1-download-model-weights)
- [Step 2: Launch SGLang Server](#step-2-launch-sglang-server)
- [Step 3: Send Inference Requests](#step-3-send-inference-requests)

## Hardware Requirements

**Minimum Configuration:**

- **GPU**: NVIDIA RTX 4090 48GB (or equivalent with at least 48GB of VRAM available)
- **RAM**: At least 650GB of system memory
- **Storage**: ~600GB for model weights (native INT4 weights; the same weight directory is used for both CPU and GPU)

**Tested Configuration:**

- **GPU**: 1/2/4/8x NVIDIA RTX 4090/L20 48GB
- **CPU**: 2x Intel(R) Xeon(R) Platinum 8488C
- **RAM**: 2TB DDR5 4800MHz
- **OS**: Linux (Ubuntu 20.04+ recommended)

## Prerequisites

Before starting, ensure you have:

1. **KT-Kernel installed** - Follow the [installation guide](./kt-kernel_intro.md#installation)
2. **SGLang installed** - Follow the [SGLang integration steps](./kt-kernel_intro.md#integration-with-sglang)

Note: Currently, please clone our custom SGLang repository and check out the `kimi_k2` branch:

```bash
git clone https://github.com/kvcache-ai/sglang.git
cd sglang
git checkout kimi_k2
pip install -e "python[all]"
```

3. **CUDA toolkit** - Compatible with your GPU (CUDA 11.8+ recommended)
4. **Hugging Face CLI** - For downloading models:

```bash
pip install huggingface-hub
```
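
Optionally, a quick sanity check can confirm the toolchain before you download the weights. This is only a sketch: it assumes `sglang` is importable under that package name and that `nvidia-smi` and `huggingface-cli` are on your `PATH`:

```bash
# GPU driver and CUDA visibility
nvidia-smi

# SGLang imports cleanly (version printed for reference)
python -c "import sglang; print('sglang', sglang.__version__)"

# Hugging Face CLI is available for Step 1
huggingface-cli --help > /dev/null && echo "huggingface-cli OK"
```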

## Step 1: Download Model Weights

```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models

# Download Kimi-K2-Thinking (INT4 for both CPU and GPU)
huggingface-cli download moonshotai/Kimi-K2-Thinking \
  --local-dir /path/to/kimi-k2-thinking
```

**Note:** Replace `/path/to/models` and `/path/to/kimi-k2-thinking` with your actual storage paths throughout this tutorial.
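
Before launching the server, it is worth confirming that the download is complete. The exact file names may differ, but the directory should contain `config.json`, tokenizer files, and a set of `*.safetensors` shards totaling roughly 600GB:

```bash
# Spot-check the weight directory and its total size
ls /path/to/kimi-k2-thinking | head
du -sh /path/to/kimi-k2-thinking
```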

## Step 2: Launch SGLang Server

Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

### Launch Command (2x RTX 4090 Example)

```bash
python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30001 \
  --model /path/to/kimi-k2-thinking \
  --kt-weight-path /path/to/kimi-k2-thinking \
  --kt-cpuinfer 96 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 8 \
  --kt-method RAWINT4 \
  --kt-gpu-prefill-token-threshold 400 \
  --kt-max-deferred-experts-per-token 1 \
  --trust-remote-code \
  --mem-fraction-static 0.94 \
  --served-model-name Kimi-K2-Thinking \
  --enable-mixed-chunk \
  --tensor-parallel-size 2 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --chunked-prefill-size 65536 \
  --max-total-tokens 65536 \
  --attention-backend flashinfer
```

The server takes about 2-3 minutes to start.
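
While waiting, you can poll the server until it is ready to accept requests. The sketch below assumes SGLang's `/health` endpoint on the port used above; polling `/v1/models` is an alternative if your build does not expose it:

```bash
# Block until the server responds as healthy (adjust host/port as needed)
until curl -sf http://localhost:30001/health > /dev/null; do
  echo "waiting for server..."
  sleep 10
done
echo "server is ready"
```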

See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.

### Key Parameters

| Parameter | Description |
|-----------|-------------|
| `--kt-method RAWINT4` | CPU and GPU share the same INT4 weights. Set `--model` and `--kt-weight-path` to the same directory. |
| `--kt-num-gpu-experts` | Number of experts kept on the GPU for decoding. |
| `--kt-gpu-prefill-token-threshold` | Token-count threshold for the prefill strategy: at or below it, hybrid CPU+GPU prefill; above it, layerwise GPU prefill. |
| `--chunked-prefill-size` | Maximum tokens per prefill batch. |
| `--max-total-tokens` | Maximum total tokens in the KV cache. |

### About `--kt-gpu-prefill-token-threshold`

This parameter controls the prefill strategy:

- **$\leq$ threshold**: Uses hybrid CPU+GPU prefill. No extra VRAM is needed, but performance degrades gradually as the token count increases.
- **> threshold**: Uses layerwise GPU prefill. Performance scales much better with token count (up to `chunked-prefill-size`), but requires 9GB+ of extra VRAM.

### Troubleshooting OOM

Layerwise prefill requires extra VRAM (~9GB plus an incremental cost that grows with prefill length). If you encounter OOM, adjust these parameters based on your use case and hardware (refer to the recommended parameters table below):

| Parameter | VRAM Impact |
|-----------|-------------|
| `--kt-num-gpu-experts` | Lower it to reduce expert-weight VRAM usage |
| `--chunked-prefill-size` | Lower it to reduce the extra VRAM allocated for prefill |
| `--max-total-tokens` | Lower it to reduce KV cache VRAM usage |

**Tip:** Test with an input of length `chunked-prefill-size` to verify your configuration won't OOM during prefill.
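
One rough way to run that test is to send a prompt of approximately `chunked-prefill-size` tokens and watch the server logs and `nvidia-smi` during prefill. The sketch below approximates token count by repeating a short word, and stays slightly below the 2x RTX 4090 settings (65536) so the request also fits within `--max-total-tokens`; adjust the count to your own configuration:

```bash
# Build a long prompt (~60k words as a rough proxy for tokens) and request
# a single output token so the run is dominated by prefill.
python3 -c "
import json
prompt = 'hello ' * 60000   # keep a little below chunked-prefill-size / max-total-tokens
body = {'model': 'Kimi-K2-Thinking',
        'max_tokens': 1,
        'messages': [{'role': 'user', 'content': prompt}]}
print(json.dumps(body))
" | curl -s http://localhost:30001/v1/chat/completions \
    -H "Content-Type: application/json" \
    --data-binary @-
```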

### Recommended Parameters

| GPU Config | `kt-num-gpu-experts` | `max-total-tokens` | `chunked-prefill-size` |
|------------|----------------------|--------------------|------------------------|
| 1x RTX 4090 (48GB) | 1 | 32768 | 32768 |
| 2x RTX 4090 (48GB) | 8 | 65536 | 65536 |
| 4x RTX 4090 (48GB) | 30 | 80000 | 65536 |
| 8x RTX 4090 (48GB) | 80 | 100000 | 65536 |
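
For example, applying the 1x RTX 4090 (48GB) row to the launch command from Step 2 might look like the sketch below. Only `--tensor-parallel-size` and the three values from the table change; the remaining flags are carried over unchanged and may still need tuning for your CPU and memory configuration:

```bash
python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30001 \
  --model /path/to/kimi-k2-thinking \
  --kt-weight-path /path/to/kimi-k2-thinking \
  --kt-cpuinfer 96 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 1 \
  --kt-method RAWINT4 \
  --kt-gpu-prefill-token-threshold 400 \
  --kt-max-deferred-experts-per-token 1 \
  --trust-remote-code \
  --mem-fraction-static 0.94 \
  --served-model-name Kimi-K2-Thinking \
  --enable-mixed-chunk \
  --tensor-parallel-size 1 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --chunked-prefill-size 32768 \
  --max-total-tokens 32768 \
  --attention-backend flashinfer
```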

## Step 3: Send Inference Requests

Once the server is running, you can send inference requests using the OpenAI-compatible API.

### Basic Chat Completion Request

```bash
curl -s http://localhost:30001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Kimi-K2-Thinking",
    "stream": false,
    "messages": [
      {"role": "user", "content": "hi"}
    ]
  }'
```

### Example Response

```json
{
  "id": "cd0905562bf44513947284f80cc5634b",
  "object": "chat.completion",
  "created": 1764921457,
  "model": "Kimi-K2-Thinking",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " <think> The user says \"hi\". This is a very simple greeting. I should respond in a friendly and helpful manner. Since I'm an AI assistant, I should be professional but approachable.\n\nPossible responses:\n1. \"Hello! How can I help you today?\"\n2. \"Hi there! What can I do for you?\"\n3. \"Hello! It's nice to hear from you. What would you like to talk about?\"\n4. \"Hi! I'm here to assist you with any questions you might have.\"\n\nI think option 1 is the most standard and professional. It's direct, friendly, and opens the door for the user to ask their question. I should keep it concise.\n\nLet me go with: \"Hello! How can I help you today?\" </think> Hello! How can I help you today?",
        "reasoning_content": null,
        "tool_calls": null
      },
      "logprobs": null,
      "finish_reason": "stop",
      "matched_stop": 163586
    }
  ],
  "usage": {
    "prompt_tokens": 26,
    "total_tokens": 189,
    "completion_tokens": 163,
    "prompt_tokens_details": null,
    "reasoning_tokens": 0
  },
  "metadata": {
    "weight_version": "default"
  }
}
```
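
The endpoint also supports streaming. Setting `"stream": true` returns the reply incrementally as server-sent events, which is convenient for long thinking traces; this sketch assumes the standard OpenAI-compatible streaming behavior:

```bash
# -N disables curl's output buffering so tokens appear as they stream in
curl -N http://localhost:30001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Kimi-K2-Thinking",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Explain CPU-GPU heterogeneous inference in one paragraph."}
    ]
  }'
```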

## Additional Resources

- [Layerwise Prefill Internals](./layerwise-prefill-internals.md) - Technical details on prefill strategies
- [KT-Kernel Documentation](../../../kt-kernel/README.md)
- [SGLang GitHub](https://github.com/sgl-project/sglang)
