This tutorial demonstrates how to run GLM-5 model inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to CPU. KT-Kernel supports both BF16 and FP8 precision backends, allowing you to choose between maximum quality and reduced memory footprint.
Table of Contents

- Prerequisites
- Step 1: Download Model Weights
- Step 2: Launch SGLang Server
- Step 3: Send Inference Requests
- Additional Resources
Before starting, ensure you have:
- **SGLang installed** - Install the kvcache-ai fork of SGLang (one of):

  ```bash
  # Option A: One-click install (from the ktransformers root)
  ./install.sh

  # Option B: pip install
  pip install sglang-kt
  ```

- **KT-Kernel installed**:

  ```bash
  git clone https://github.com/kvcache-ai/ktransformers.git
  cd ktransformers
  git submodule update --init --recursive
  cd kt-kernel && ./install.sh
  ```

- **transformers reinstalled** - Reinstall transformers from the GitHub main branch:

  ```bash
  pip install git+https://github.com/huggingface/transformers.git
  ```

- **CUDA toolkit** - CUDA 12.0+ recommended (12.8+ for best FP8 support)

- **Hugging Face CLI** - For downloading models:

  ```bash
  pip install -U huggingface-hub
  ```
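Optionally, sanity-check the Python-side installs before moving on. This is only a quick sketch; it assumes the packages expose the usual top-level version attributes and that `nvcc` is on your PATH:

```bash
# Confirm the SGLang fork and transformers import cleanly and print their versions
python -c "import sglang, transformers; print(sglang.__version__, transformers.__version__)"

# Confirm which CUDA toolkit the build environment sees
nvcc --version
```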
Download the GLM-5 weights from Hugging Face.
```bash
# FP8
hf download zai-org/GLM-5-FP8 \
--local-dir /path/to/GLM-5-FP8

# BF16
hf download zai-org/GLM-5 \
--local-dir /path/to/GLM-5
```

Note: Replace /path/to/ with your actual storage path throughout this tutorial.
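Once the download finishes, the target directory should contain the standard Hugging Face layout (a config.json plus sharded *.safetensors files). A quick check, using the FP8 path from above as an example:

```bash
# Count the downloaded weight shards and peek at the model config
ls /path/to/GLM-5-FP8/*.safetensors | wc -l
head /path/to/GLM-5-FP8/config.json
```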
Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.
```bash
# FP8 Precision
export PYTORCH_ALLOC_CONF=expandable_segments:True
export SGLANG_ENABLE_JIT_DEEPGEMM=0
python -m sglang.launch_server \
--host 0.0.0.0 \
--port 30000 \
--model /path/to/GLM-5-FP8 \
--kt-weight-path /path/to/GLM-5-FP8 \
--kt-cpuinfer 96 \
--kt-threadpool-count 2 \
--kt-num-gpu-experts 30 \
--kt-method FP8 \
--kt-gpu-prefill-token-threshold 1024 \
--kt-enable-dynamic-expert-update \
--kt-expert-placement-strategy uniform \
--trust-remote-code \
--mem-fraction-static 0.75 \
--served-model-name GLM5 \
--enable-mixed-chunk \
--tensor-parallel-size 8 \
--enable-p2p-check \
--disable-shared-experts-fusion \
--chunked-prefill-size 16384 \
--max-running-requests 4 \
--max-total-tokens 128000 \
--attention-backend flashinfer \
--fp8-gemm-backend cutlass \
--kv-cache-dtype bf16 \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--watchdog-timeout 3000
```
```bash
# BF16 Precision
export PYTORCH_ALLOC_CONF=expandable_segments:True
export SGLANG_ENABLE_JIT_DEEPGEMM=0
python -m sglang.launch_server \
--host 0.0.0.0 \
--port 30000 \
--model /path/to/GLM-5 \
--kt-weight-path /path/to/GLM-5 \
--kt-cpuinfer 96 \
--kt-threadpool-count 2 \
--kt-num-gpu-experts 10 \
--kt-method BF16 \
--kt-gpu-prefill-token-threshold 1024 \
--kt-enable-dynamic-expert-update \
--kt-expert-placement-strategy uniform \
--trust-remote-code \
--mem-fraction-static 0.75 \
--served-model-name GLM5 \
--enable-mixed-chunk \
--tensor-parallel-size 8 \
--enable-p2p-check \
--disable-shared-experts-fusion \
--chunked-prefill-size 16384 \
--max-running-requests 4 \
--max-total-tokens 128000 \
--attention-backend flashinfer \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--watchdog-timeout 3000
```

Layerwise prefill requires one extra MoE layer's worth of VRAM.
If you run into out-of-memory (OOM) errors, adjust `--kt-num-gpu-experts`, `--chunked-prefill-size`, `--mem-fraction-static`, and `--max-total-tokens` when launching the server.
If you encounter other issues, run `kt doctor` to diagnose your setup.
See KT-Kernel Parameters for detailed parameter tuning guidelines.
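Before sending real traffic, you can probe whether the server has finished loading. A minimal check, assuming the default host/port from the launch commands above (recent SGLang builds expose a /health route alongside the OpenAI-compatible API, but verify against your installed version):

```bash
# Liveness probe (expects an HTTP 200 once the server is up)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:30000/health

# List the served model names via the OpenAI-compatible route
curl -s http://localhost:30000/v1/models
```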
Once the server is running (default: http://localhost:30000), you can interact with the model in several ways:
The easiest way to chat with the model:
```bash
kt chat
```

This opens an interactive terminal chat session. Type your messages and press Enter to send. Use Ctrl+C to exit.
The server exposes an OpenAI-compatible API at http://localhost:30000/v1.
curl example (streaming):
```bash
curl http://localhost:30000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "GLM5",
"messages": [{"role": "user", "content": "hi, who are you?"}],
"stream": true
}'
```
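The same endpoint also accepts non-streaming requests with the usual OpenAI-style sampling parameters. A minimal example; the temperature and max_tokens values here are illustrative, not tuned recommendations:

```bash
curl http://localhost:30000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "GLM5",
"messages": [{"role": "user", "content": "Explain CPU-GPU heterogeneous inference in two sentences."}],
"stream": false,
"temperature": 0.6,
"max_tokens": 256
}'
```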