content/learning-paths/servers-and-cloud-computing/vllm-acceleration/1-overview-and-build.md
---
title: Build and validate vLLM for Arm64 inference on Azure Cobalt 100
weight: 2
### FIXED, DO NOT MODIFY
layout: learningpathall
---
## What is vLLM?
vLLM is an open-source, high-throughput inference and serving engine for large language models (LLMs). It’s designed to make LLM inference faster, more memory-efficient, and scalable, particularly during the prefill (context processing) and decode (token generation) phases of inference.
## Key features
* Continuous batching: dynamically merges incoming inference requests into larger batches, maximizing Arm CPU utilization and overall throughput
* KV cache management: efficiently stores and reuses key-value attention states, sustaining concurrency across multiple active sessions while minimizing memory overhead
* Token streaming: streams generated tokens as they are produced, enabling real-time responses for chat or API scenarios
## Interaction modes
You can use vLLM in two main ways:
- Using an OpenAI-compatible REST server: vLLM provides a `/v1/chat/completions` endpoint compatible with the OpenAI API schema, making it drop-in ready for tools like LangChain, LlamaIndex, and the official OpenAI Python SDK (an example request is shown after this list)
- Using a Python API: load and serve models programmatically within your own Python scripts for flexible local inference and evaluation
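
For example, once a vLLM server from this Learning Path is running, you can exercise the OpenAI-compatible endpoint with a plain HTTP request. The sketch below assumes the server listens on the default port 8000 and serves the TinyLlama model used later for validation; adjust the host, port, and model name to match your deployment.

```bash
# Send a chat request to a running vLLM server's OpenAI-compatible endpoint.
# Host, port, and model name are illustrative.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        "messages": [{"role": "user", "content": "What is vLLM?"}],
        "max_tokens": 64
      }'
```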
vLLM supports Hugging Face Transformer models out-of-the-box and scales seamlessly from single-prompt testing to production batch inference.
## What you'll build
In this Learning Path, you'll build a CPU-optimized version of vLLM targeting the Arm64 architecture, integrated with oneDNN and the Arm Compute Library (ACL).
This build enables high-performance LLM inference on Arm servers, leveraging specialized Arm math libraries and kernel optimizations.
After compiling, you’ll validate your build by running a local chat example to confirm functionality and measure baseline inference speed.
## Why this is fast on Arm
vLLM achieves high performance on Arm servers by combining software and hardware optimizations. Here’s why your build runs fast:
- Arm-optimized kernels: vLLM uses oneDNN and the Arm Compute Library to accelerate matrix multiplications, normalization, and activation functions. These libraries are tuned for Arm’s aarch64 architecture.
- Efficient quantization: INT4 quantized models run faster on Arm because KleidiAI microkernels use DOT-product instructions (SDOT/UDOT) available on Arm CPUs.
- Paged attention tuning: the paged attention mechanism is optimized for Arm’s NEON and SVE pipelines, improving token reuse and throughput during long-sequence generation.
- MoE fusion: for Mixture-of-Experts models, vLLM fuses INT4 expert layers to reduce memory transfers and bandwidth bottlenecks.
- Thread affinity and memory allocation: setting thread affinity pins worker threads to cores and keeps scheduling balanced across the CPU, while tcmalloc (Thread-Caching Malloc) reduces memory fragmentation and allocator contention under high-throughput serving loads (example settings are shown below).
These optimizations work together to deliver higher throughput and lower latency for LLM inference on Arm servers.
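
As a minimal sketch of the last point, you can preload tcmalloc and pin vLLM's worker threads before starting the server. The package name and library path below are typical for Ubuntu on arm64, and the `VLLM_CPU_*` environment variables are assumptions based on vLLM's CPU backend; check the documentation for your vLLM version and adjust the core range to your instance size.

```bash
# Preload tcmalloc to reduce allocator contention and memory fragmentation
# (library path shown is the usual location on Ubuntu arm64).
sudo apt-get install -y libtcmalloc-minimal4
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libtcmalloc_minimal.so.4

# Pin vLLM's OpenMP worker threads to a fixed core range and reserve KV cache
# space (values are illustrative for a 32-vCPU instance).
export VLLM_CPU_OMP_THREADS_BIND=0-31
export VLLM_CPU_KVCACHE_SPACE=32
```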
## Set up your environment
Before you begin, make sure your environment meets these requirements:
- Python 3.12 on Ubuntu 22.04 LTS or newer
- At least 32 vCPUs, 64 GB RAM, and 64 GB of free disk space
This Learning Path was tested on an AWS Graviton4 c8g.12xlarge instance with 64 GB of attached storage.
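
You can confirm that your instance meets these requirements, and that the CPU exposes the dot-product and SVE features the optimized kernels rely on, with a few standard commands:

```bash
# Check architecture, core count, memory, disk space, and Python version
uname -m            # should print aarch64
nproc               # expect 32 or more vCPUs
free -h             # expect 64 GB or more of RAM
df -h .             # expect 64 GB or more of free disk space
python3 --version   # expect Python 3.12.x

# Confirm the CPU advertises dot-product (asimddp) and SVE support
lscpu | grep -i -E 'asimddp|sve'
```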
## Install build dependencies
Install the following packages required for compiling vLLM and its dependencies on Arm64:
## Build vLLM for Arm64 CPU
You’ll now build vLLM optimized for Arm (aarch64) servers with oneDNN and the Arm Compute Library (ACL) automatically enabled in the CPU backend.
## Create and activate a Python virtual environment
It’s best practice to build vLLM inside an isolated environment to prevent conflicts between system and project dependencies:
```bash
python3 -m venv vllm_env
source vllm_env/bin/activate
python3 -m pip install --upgrade pip
```
## Clone vLLM and install build requirements
Download the official vLLM source code and install its CPU-specific build dependencies:
The specific commit (5fb4137) pins a verified version of vLLM that officially adds Arm CPUs to the list of supported build targets, ensuring full compatibility and optimized performance for Arm-based systems.
## Build the vLLM wheel for CPU
Run the following command to compile and package vLLM as a Python wheel optimized for CPU inference:
The output wheel will appear under `dist/` and include all compiled C++/PyBind modules.
## Install the wheel
Install the freshly built wheel into your active environment:
{{% notice Note %}}
Do not delete the local vLLM source directory.
The repository contains C++ extensions and runtime libraries required for correct CPU inference on aarch64 after wheel installation.
{{% /notice %}}
## Validate your build with offline inference
Run a quick test to confirm your Arm-optimized vLLM build works as expected. Use the built-in chat example to perform offline inference and verify that oneDNN and Arm Compute Library optimizations are active.
```bash
python examples/offline_inference/basic/chat.py \
  --dtype=bfloat16 \
  --model TinyLlama/TinyLlama-1.1B-Chat-v1.0
```
This command runs a small Hugging Face model in bfloat16 precision, streaming generated tokens to the console. If you see token streaming and generated text, your vLLM build is correctly configured for Arm64 inference.
{{% notice Note %}}
As CPU support in vLLM continues to mature, these manual build steps will eventually be replaced by a streamlined `pip` install workflow for aarch64, simplifying future deployments on Arm servers.
{{% /notice %}}
You have now verified that your vLLM Arm64 build runs correctly and performs inference using Arm-optimized kernels.
content/learning-paths/servers-and-cloud-computing/vllm-acceleration/2-quantize-model.md
---
title: Quantize an LLM to INT4
weight: 3
### FIXED, DO NOT MODIFY
layout: learningpathall
---
## Accelerate LLMs with 4-bit quantization
You can accelerate many LLMs on Arm CPUs with 4‑bit quantization. In this section, you’ll quantize the deepseek-ai/DeepSeek-V2-Lite model to 4-bit integer (INT4) weights.
The quantized model runs efficiently through vLLM’s INT4 inference path, which is accelerated by Arm KleidiAI microkernels.
If the model you plan to quantize is gated on Hugging Face (for example, DeepSeek), log in to your Hugging Face account first:

```bash
huggingface-cli login
```
## Apply the INT4 quantization recipe
Using a file editor of your choice, save the following code into a file named `quantize_vllm_models.py`:
This script creates an Arm KleidiAI INT4 quantized copy of the vLLM model and saves it locally.
## Quantize DeepSeek‑V2‑Lite model
Quantizing your model to INT4 format significantly reduces memory usage and improves inference speed on Arm CPUs. In this section, you'll apply the quantization script to the DeepSeek‑V2‑Lite model, tuning key parameters for optimal performance and accuracy. This process prepares your model for efficient deployment with vLLM on Arm-based servers.
## Tune quantization parameters
Quantization parameters control how the model’s floating-point weights and activations are converted to lower-precision integer formats. The right settings help you balance accuracy, memory usage, and performance on Arm CPUs.
- Use `minmax` for faster quantization, or `mse` for higher accuracy (but slower)
- Choose `channelwise` for most models; it’s a reliable default
- Try `groupwise` for potentially better accuracy; `--groupsize 32` is a common choice
Pick the combination that fits your accuracy and speed needs.
Execute the following command to quantize the DeepSeek-V2-Lite model:
For a non-quantized path, vLLM on Arm can run BF16 end-to-end using its oneDNN integration (which routes to Arm-optimized kernels using ACL under aarch64).
Use this BF16 setup to establish a quality reference baseline, then compare throughput and latency against your INT4 deployment to quantify the performance/accuracy trade-offs on your Arm system.
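
A minimal sketch of starting such a BF16 baseline server is shown below; the `vllm serve` options and model choice are illustrative, so follow the serving instructions elsewhere in this Learning Path for the exact command.

```bash
# Serve the unquantized model in BF16 as a quality and performance baseline
# (flags and context length are illustrative; adjust to your model and memory).
vllm serve deepseek-ai/DeepSeek-V2-Lite \
  --dtype bfloat16 \
  --trust-remote-code \
  --max-model-len 4096
```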
## Go beyond: power up your vLLM workflow
Now that you’ve successfully quantized, served, and benchmarked a model using vLLM on Arm, you can build on what you’ve learned to push performance, scalability, and usability even further.
## Try different models
Explore other Hugging Face models that work well with vLLM and take advantage of Arm acceleration:
- Meta Llama 2 and Llama 3: these versatile models work well for general tasks, and you can try them to compare BF16 and INT4 performance
- Qwen and Qwen-Chat: these models support multiple languages and are tuned for instructions, giving you high-quality results
- Gemma (Google): this compact and efficient model is a good choice for edge devices or deployments where cost matters
You can quantize and serve any of these models using the same `quantize_vllm_models.py` script. Just update the model name in the script.
You can also try connecting a chat client by linking your server with OpenAI-compatible user interfaces such as [Open WebUI](https://github.com/open-webui/open-webui).
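
As a sketch, assuming your vLLM server exposes its OpenAI-compatible API on port 8000, you can run Open WebUI in Docker and point it at that endpoint. The image name, port mapping, and environment variables below follow the Open WebUI quickstart and may change, so treat them as assumptions and check the Open WebUI documentation.

```bash
# Run Open WebUI and point it at the local vLLM OpenAI-compatible endpoint.
# On Linux, host.docker.internal needs the extra --add-host mapping shown here.
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e OPENAI_API_BASE_URL=http://host.docker.internal:8000/v1 \
  -e OPENAI_API_KEY=unused \
  ghcr.io/open-webui/open-webui:main
```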
Continue exploring how Arm efficiency, oneDNN and ACL acceleration, and vLLM dynamic batching work together to provide fast, sustainable, and scalable AI inference on modern Arm architectures.