---
title: "🚀 AutoRound Meets SGLang: Enabling Quantized Model Inference with AutoRound"
author: "By Intel Neural Compressor Team"
date: "November 14, 2025"
previewImg: /images/blog/AutoRound/preview.png
---

## Overview

We are thrilled to announce an official collaboration between [**SGLang**](https://github.com/sgl-project/sglang) and [**AutoRound**](https://github.com/intel/auto-round), enabling low-bit quantization for efficient LLM inference.

Through this integration, developers can now quantize large models with AutoRound’s signed-gradient optimization and directly deploy them in SGLang’s efficient runtime, achieving low-bit model inference with minimal accuracy loss and significant latency reduction.

## What Is AutoRound?

AutoRound is an advanced post-training quantization (PTQ) toolkit designed for Large Language Models (**LLMs**) and Vision-Language Models (**VLMs**). It uses signed gradient descent to jointly optimize weight rounding and clipping ranges, enabling accurate low-bit quantization (e.g., INT2 to INT8) with minimal accuracy loss in most scenarios. For example, at INT2 precision it achieves up to 2.1x higher relative accuracy than popular baselines, and at INT4 precision it continues to hold a competitive edge in most cases. The image below provides an overview of the core algorithm in AutoRound.

Full technical details are presented in the AutoRound paper:

👉 [Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs](https://arxiv.org/abs/2309.05516)

<p align="center">
  <img src="/images/blog/AutoRound/autoround_overview.png" width="80%">
</p>
<p align="center" style="color:gray; text-align: center;"><em>AutoRound algorithm overview</em></p>
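To make the mechanism concrete, here is a toy, self-contained sketch of the core idea. It is our illustration rather than the library's actual code: it only tunes per-weight rounding offsets for a single layer, whereas AutoRound also tunes clipping ranges and works block by block on calibration data.

```python
# Toy illustration of the AutoRound idea (not the library's implementation):
# learn a per-weight rounding offset V with signed gradient descent so that the
# INT4-quantized layer reproduces the full-precision layer output.
import torch

torch.manual_seed(0)
W = torch.randn(64, 64)            # full-precision weights of one linear layer
X = torch.randn(128, 64)           # a small batch of calibration activations
qmax = 2 ** (4 - 1) - 1            # symmetric INT4 range: [-8, 7]
scale = W.abs().max() / qmax       # simple per-tensor scale, for illustration only

V = torch.zeros_like(W, requires_grad=True)  # learnable rounding offset in [-0.5, 0.5]
for _ in range(200):
    cont = W / scale + V
    # Straight-through estimator: the forward pass uses the rounded value,
    # while the gradient flows through the continuous value into V.
    q = torch.clamp(torch.round(cont), -qmax - 1, qmax)
    W_q = (cont + (q - cont).detach()) * scale
    loss = torch.nn.functional.mse_loss(X @ W_q.T, X @ W.T)
    loss.backward()
    with torch.no_grad():
        V -= 5e-3 * V.grad.sign()  # signed gradient descent step
        V.clamp_(-0.5, 0.5)
        V.grad = None

print(f"output MSE after tuning the rounding offsets: {loss.item():.6f}")
```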

Despite its robust performance, AutoRound remains fast and lightweight: quantizing a 72B model takes only 37 minutes on a single GPU in light mode.

It also supports mixed-bit tuning, lm-head quantization, GPTQ/AWQ/GGUF format exports, and customizable tuning recipes.

## AutoRound Highlights

AutoRound is not only focused on algorithmic innovation and exploration, but is also widely recognized for the completeness of its quantization engineering.

- **Accuracy:** delivers superior accuracy at low-bit precision

<p align="center">
  <img src="/images/blog/AutoRound/int4_accs.png" width="80%">
</p>
<p align="center" style="color:gray; text-align: center;"><em>Average accuracy across 10+ tasks with INT4 weights</em></p>

- **Schemes:** supports weight-only quantization and weight-activation quantization, with both dynamic and static activation quantization
- **Mixed-bits:** provides an effective algorithm that generates mixed-bit and mixed-data-type schemes in minutes (a per-layer configuration sketch is shown after this list)
- **Broad Compatibility:**
  - Supports nearly all popular LLM architectures and over 10 vision-language models (VLMs)
  - Supported devices: CPU, Intel GPU, and CUDA
  - Supported data types: INT2-INT8, MXFP4, NVFP4, FP8, and MXFP8
- **Efficiency:** uses block-wise tuning to lower VRAM usage while keeping quantization fast

<p align="center">
  <img src="/images/blog/AutoRound/timecost.png" width="80%">
</p>
<p align="center" style="color:gray; text-align: center;"><em>Quantization time cost comparison</em></p>

- **Community adoption:**
  - Works seamlessly with SGLang, TorchAO, Transformers, and vLLM
  - Widely used across Hugging Face model hubs such as [Intel](https://huggingface.co/Intel), [OPEA](https://huggingface.co/OPEA), [Kaitchup](https://huggingface.co/kaitchup), and [fbaldassarri](https://huggingface.co/fbaldassarri), with approximately two million downloads
- **Export Formats:**
  - AutoRound
  - GPTQ
  - AWQ
  - GGUF
  - Compressed-tensor (initial support)

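As a concrete (and deliberately hedged) illustration of the mixed-bit and lm-head capabilities above, the sketch below pins individual layers to different bit-widths while the rest of the model follows the default scheme. The `layer_config` argument and the per-layer keys follow the AutoRound README, but treat the exact parameter names and values as assumptions that may vary across versions.

```python
# Sketch: mixed-bit quantization by overriding the scheme for specific layers.
# The `layer_config` argument name follows the AutoRound README and is an
# assumption here; consult the documentation of your installed version.
from auto_round import AutoRound

model_id = "meta-llama/Llama-3.2-1B-Instruct"

layer_config = {
    "model.layers.0.self_attn.k_proj": {"bits": 8},  # keep a sensitive layer at 8 bits
    "lm_head": {"bits": 4},                          # also quantize the lm-head
}

autoround = AutoRound(model_id, scheme="W4A16", layer_config=layer_config)
autoround.quantize_and_save("Llama-3.2-1B-Instruct-mixed-bits", format="auto_round")
```
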
## Integration Overview

SGLang provides a next-generation inference runtime built for scalable, low-latency LLM deployment. Its multi-modal, multi-GPU, and streaming execution model enables both chat and agentic reasoning workloads with exceptional efficiency.

SGLang’s flexible architecture now offers native hooks for quantized model loading, unlocking AutoRound’s full potential for deployment.

### **1. Quantize with AutoRound**
75+
76+
AutoRound automatically optimizes weight rounding and exports quantized weights that compatible with SGLang.
77+
78+
#### **1.1 API Usage**
79+
80+
```python
81+
# for LLM
82+
from auto_round import AutoRound
83+
model_id = "meta-llama/Llama-3.2-1B-Instruct"
84+
quant_path = "Llama-3.2-1B-Instruct-autoround-4bit"
85+
# Scheme examples: "W2A16", "W3A16", "W4A16", "W8A16", "NVFP4", "MXFP4" (no real kernels), "GGUF:Q4_K_M", etc.
86+
scheme = "W4A16"
87+
format = "auto_round"
88+
autoround = AutoRound(model_id, scheme=scheme)
89+
autoround.quantize_and_save(quant_path, format=format) # quantize and save
90+
```
91+
92+
#### **1.2 CMD Usage**

```bash
auto-round \
    --model Qwen/Qwen2-VL-2B-Instruct \
    --bits 4 \
    --group_size 128 \
    --format "auto_round" \
    --output_dir ./tmp_autoround
```

### **2. Deploying with SGLang**
103+
104+
SGLang supports AutoRound-quantized models directly (Version>=v0.5.4.post2). It is compatible with SGLang-supported modeling architectures, including common LLM, VLM, and MoE models, and also supports inference and evaluation of AutoRound mixed-bit quantized models.
105+
106+
#### **2.1 OpenAI-Compatible Inference Usage**
107+
108+
```python
109+
from sglang.test.doc_patch import launch_server_cmd
110+
from sglang.utils import wait_for_server, print_highlight, terminate_process
111+
112+
# This is equivalent to running the following command in your terminal
113+
# python3 -m sglang.launch_server --model-path Intel/DeepSeek-R1-0528-Qwen3-8B-int4-AutoRound --host 0.0.0.0
114+
115+
server_process, port = launch_server_cmd(
116+
"""
117+
python3 -m sglang.launch_server --model-path Intel/DeepSeek-R1-0528-Qwen3-8B-int4-AutoRound \
118+
--host 0.0.0.0 --log-level warning
119+
"""
120+
)
121+
wait_for_server(f"http://localhost:{port}")
122+
```
123+
124+
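Once the server reports ready, it can be queried through SGLang's OpenAI-compatible endpoint. Below is a minimal sketch using the standard `openai` Python client; the model name simply mirrors the served model path, and the final `terminate_process` call reuses the helper imported above to shut the server down.

```python
from openai import OpenAI

# Point the OpenAI client at the local SGLang server launched above.
client = OpenAI(base_url=f"http://localhost:{port}/v1", api_key="None")

response = client.chat.completions.create(
    model="Intel/DeepSeek-R1-0528-Qwen3-8B-int4-AutoRound",
    messages=[{"role": "user", "content": "Give me a one-sentence summary of AutoRound."}],
    temperature=0.6,
    max_tokens=128,
)
print_highlight(response.choices[0].message.content)

terminate_process(server_process)  # stop the server when you are done
```
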
#### **2.2 Offline Engine API Inference Usage**

```python
import sglang as sgl

llm = sgl.Engine(model_path="Intel/DeepSeek-R1-0528-Qwen3-8B-int4-AutoRound")

prompts = ["Hello, my name is"]
sampling_params = {"temperature": 0.6, "top_p": 0.95}

outputs = llm.generate(prompts, sampling_params)
for prompt, output in zip(prompts, outputs):
    print(f"Prompt: {prompt}\nGenerated text: {output['text']}")
```
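When a long-lived script is finished with the engine, it is good practice to release the resources it holds; recent SGLang releases expose a `shutdown()` method on the offline engine for this purpose.

```python
llm.shutdown()  # release the GPU memory and workers held by the offline engine
```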

More flexible configurations and deployment options are waiting for you to explore!

## Quantization Roadmap

AutoRound’s quantization benchmark results demonstrate robust accuracy retention at low precision. The results below highlight AutoRound’s strong advantages and potential in MXFP4, NVFP4, and mixed-bit model quantization. Note that each accuracy number is the average accuracy across the *lambada_openai*, *hellaswag*, *piqa*, *winogrande*, and *mmlu* tasks.

As part of the AutoRound roadmap, we plan to continue improving MXFP4 & NVFP4 accuracy for common models and auto mixed-bit quantization in upcoming releases.

- MXFP4 & NVFP4 quantization. The RTN (round-to-nearest) algorithm serves as the baseline, and the _'alg_ext'_ option indicates that experimental optimization algorithms are enabled.

| MXFP4 | Llama3.1-8B-Instruct | Qwen2.5-7B-Instruct | Phi4 | Qwen3-32B |
|:-------------------|:----------------------:|:--------------------:|:---------:|:-----------:|
| RTN | 0.6212 | 0.6550 | 0.7167 | 0.6901 |
| AutoRound | 0.6686 | 0.6758 | 0.7247 | 0.7211 |
| AutoRound+alg_ext | 0.6732 | 0.6809 | 0.7225 | 0.7201 |

| NVFP4 | Llama3.1-8B-Instruct | Qwen2.5-7B-Instruct | Phi4 | Qwen3-32B |
|:-------------------|:----------------------:|:--------------------:|:---------:|:-----------:|
| RTN | 0.6876 | 0.6906 | 0.7296 | 0.7164 |
| AutoRound | 0.6918 | 0.6973 | 0.7306 | 0.7306 |
| AutoRound+alg_ext | 0.6965 | 0.6989 | 0.7318 | 0.7295 |

- Auto MXFP4 & MXFP8 mixed-bits quantization

| Average bits | Llama3.1-8B-I | Qwen2.5-7B-I | Qwen3-8B | Qwen3-32B |
|:------------------|:----------------:|:----------------:|:----------------:|:----------------:|
| **BF16** | 0.7076 (100%) | 0.7075 (100%) | 0.6764 (100%) | 0.7321 (100%) |
| **4-bit** | 0.6626 (93.6%) | 0.6550 (92.6%) | 0.6316 (93.4%) | 0.6901 (94.3%) |
| **4.5-bit** | 0.6808 (96.2%) | 0.6776 (95.8%) | 0.6550 (96.8%) | 0.7176 (98.0%) |
| **5-bit** | 0.6857 (96.9%) | 0.6823 (96.4%) | 0.6594 (97.5%) | 0.7201 (98.3%) |
| **6-bit** | 0.6975 (98.6%) | 0.6970 (98.5%) | 0.6716 (99.3%) | 0.7303 (99.8%) |

## Conclusion

The integration of AutoRound and SGLang marks a major milestone in efficient AI model deployment. This collaboration bridges precision optimization and runtime scalability, allowing developers to move seamlessly from quantization to real-time inference with minimal friction. AutoRound’s signed-gradient quantization ensures high fidelity even at extreme compression ratios, while SGLang’s high-throughput inference engine unlocks the full potential of low-bit execution across CPUs, GPUs, and multi-node clusters.

Looking forward, we aim to expand support for advanced quantization formats, optimize kernel efficiency, and bring AutoRound quantization into broader multimodal and agentic workloads. Together, AutoRound and SGLang are setting a new standard for intelligent, efficient, and scalable LLM deployment.