
Commit 4fb156f

Merge pull request #2556 from madeline-underwood/vllm2
Vllm_JA to sign off
2 parents: d13829e + ebf51a2 · commit 4fb156f

File tree

5 files changed (+149, -145 lines)

content/learning-paths/servers-and-cloud-computing/vllm-acceleration/1-overview-and-build.md

Lines changed: 61 additions & 30 deletions
@@ -1,5 +1,5 @@
 ---
-title: Overview and Optimized Build
+title: Build and validate vLLM for Arm64 inference on Azure Cobalt 100
 weight: 2

 ### FIXED, DO NOT MODIFY
@@ -8,50 +8,57 @@ layout: learningpathall

 ## What is vLLM?

-vLLM is an open-source, high-throughput inference and serving engine for large language models (LLMs).
-It’s designed to make LLM inference faster, more memory-efficient, and scalable, particularly during the prefill (context processing) and decode (token generation) phases of inference.
+vLLM is an open-source, high-throughput inference and serving engine for large language models (LLMs). It’s designed to make LLM inference faster, more memory-efficient, and scalable, particularly during the prefill (context processing) and decode (token generation) phases of inference.

-### Key Features
-* Continuous Batching – Dynamically combines incoming inference requests into a single large batch, maximizing CPU/GPU utilization and throughput.
-* KV Cache Management – Efficiently stores and reuses key-value attention states, sustaining concurrency across multiple active sessions while minimizing memory overhead.
-* Token Streaming – Streams generated tokens as they are produced, enabling real-time responses for chat or API scenarios.
-### Interaction Modes
+## Key features
+* Continuous batching: dynamically merges incoming inference requests into larger batches, maximizing Arm CPU utilization and overall throughput
+* KV cache management: efficiently stores and reuses key-value attention states, sustaining concurrency across multiple active sessions while minimizing memory overhead
+* Token streaming: streams generated tokens as they are produced, enabling real-time responses for chat or API scenarios
+## Interaction modes
 You can use vLLM in two main ways:
-* OpenAI-Compatible REST Server:
-vLLM provides a /v1/chat/completions endpoint compatible with the OpenAI API schema, making it drop-in ready for tools like LangChain, LlamaIndex, and the official OpenAI Python SDK.
-* Python API:
-Load and serve models programmatically within your own Python scripts for flexible local inference and evaluation.
+- Using an OpenAI-Compatible REST Server: vLLM provides a /v1/chat/completions endpoint compatible with the OpenAI API schema, making it drop-in ready for tools like LangChain, LlamaIndex, and the official OpenAI Python SDK
+- Using a Python API: load and serve models programmatically within your own Python scripts for flexible local inference and evaluation

 vLLM supports Hugging Face Transformer models out-of-the-box and scales seamlessly from single-prompt testing to production batch inference.

-## What you build
+## What you'll build

-In this learning path, you will build a CPU-optimized version of vLLM targeting the Arm64 architecture, integrated with oneDNN and the Arm Compute Library (ACL).
+In this Learning Path, you'll build a CPU-optimized version of vLLM targeting the Arm64 architecture, integrated with oneDNN and the Arm Compute Library (ACL).
 This build enables high-performance LLM inference on Arm servers, leveraging specialized Arm math libraries and kernel optimizations.
 After compiling, you’ll validate your build by running a local chat example to confirm functionality and measure baseline inference speed.

 ## Why this is fast on Arm

+vLLM achieves high performance on Arm servers by combining software and hardware optimizations. Here’s why your build runs fast:
+
+- Arm-optimized kernels: vLLM uses oneDNN and the Arm Compute Library to accelerate matrix multiplications, normalization, and activation functions. These libraries are tuned for Arm’s aarch64 architecture.
+- Efficient quantization: INT4 quantized models run faster on Arm because KleidiAI microkernels use DOT-product instructions (SDOT/UDOT) available on Arm CPUs.
+- Paged attention tuning: the paged attention mechanism is optimized for Arm’s NEON and SVE pipelines, improving token reuse and throughput during long-sequence generation.
+- MoE fusion: for Mixture-of-Experts models, vLLM fuses INT4 expert layers to reduce memory transfers and bandwidth bottlenecks.
+- Thread affinity and memory allocation: setting thread affinity ensures balanced CPU core usage, while tcmalloc reduces memory fragmentation and allocator contention.
+
+These optimizations work together to deliver higher throughput and lower latency for LLM inference on Arm servers.
+
 vLLM’s performance on Arm servers is driven by both software optimization and hardware-level acceleration.
 Each component of this optimized build contributes to higher throughput and lower latency during inference:

-- Optimized kernels: The aarch64 vLLM build uses direct oneDNN with the Arm Compute Library for key operations.
+- Optimized kernels: the aarch64 vLLM build uses direct oneDNN with the Arm Compute Library for key operations.
 - 4‑bit weight quantization: vLLM supports INT4 quantized models, and Arm accelerates this using KleidiAI microkernels, which take advantage of DOT-product (SDOT/UDOT) instructions.
-- Efficient MoE execution: For Mixture-of-Experts (MoE) models, vLLM fuses INT4 quantized expert layers to reduce intermediate memory transfers, which minimizes bandwidth bottlenecks
-- Optimized Paged attention: The paged attention mechanism, which handles token reuse during long-sequence generation, is SIMD-tuned for Arm’s NEON and SVE (Scalable Vector Extension) pipelines.
-- System tuning: Using thread affinity ensures efficient CPU core pinning and balanced thread scheduling across Arm clusters.
+- Efficient MoE execution: for Mixture-of-Experts (MoE) models, vLLM fuses INT4 quantized expert layers to reduce intermediate memory transfers, which minimizes bandwidth bottlenecks
+- Optimized paged attention: the paged attention mechanism, which handles token reuse during long-sequence generation, is SIMD-tuned for Arm’s NEON and SVE (Scalable Vector Extension) pipelines.
+- System tuning: using thread affinity ensures efficient CPU core pinning and balanced thread scheduling across Arm clusters.
 Additionally, enabling tcmalloc (Thread-Caching Malloc) reduces allocator contention and memory fragmentation under high-throughput serving loads.

-## Before you begin
+## Set up your environment

-Verify that your environment meets the following requirements:
+Before you begin, make sure your environment meets these requirements:

-Python version: Use Python 3.12 on Ubuntu 22.04 LTS or later.
-Hardware requirements: At least 32 vCPUs, 64 GB RAM, and 64 GB of free disk space.
+- Python 3.12 on Ubuntu 22.04 LTS or newer
+- At least 32 vCPUs, 64 GB RAM, and 64 GB of free disk space

-This Learning Path was validated on an AWS Graviton4 c8g.12xlarge instance with 64 GB of attached storage.
+This Learning Path was tested on an AWS Graviton4 c8g.12xlarge instance with 64 GB of attached storage.

-### Install Build Dependencies
+## Install build dependencies

 Install the following packages required for compiling vLLM and its dependencies on Arm64:
 ```bash
@@ -74,7 +81,7 @@ This ensures optimized Arm kernels are used for matrix multiplications, layer no
 ## Build vLLM for Arm64 CPU
 You’ll now build vLLM optimized for Arm (aarch64) servers with oneDNN and the Arm Compute Library (ACL) automatically enabled in the CPU backend.

-1. Create and Activate a Python Virtual Environment
+## Create and activate a Python virtual environment
 It’s best practice to build vLLM inside an isolated environment to prevent conflicts between system and project dependencies:

 ```bash
@@ -83,7 +90,7 @@ source vllm_env/bin/activate
 python3 -m pip install --upgrade pip
 ```

-2. Clone vLLM and Install Build Requirements
+## Clone vLLM and install build requirements
 Download the official vLLM source code and install its CPU-specific build dependencies:

 ```bash
@@ -94,15 +101,15 @@ pip install -r requirements/cpu.txt -r requirements/cpu-build.txt
 ```
 The specific commit (5fb4137) pins a verified version of vLLM that officially adds Arm CPUs to the list of supported build targets, ensuring full compatibility and optimized performance for Arm-based systems.

-3. Build the vLLM Wheel for CPU
+## Build the vLLM wheel for CPU
 Run the following command to compile and package vLLM as a Python wheel optimized for CPU inference:

 ```bash
 VLLM_TARGET_DEVICE=cpu python3 setup.py bdist_wheel
 ```
 The output wheel will appear under dist/ and include all compiled C++/PyBind modules.

-4. Install the Wheel
+## Install the wheel
 Install the freshly built wheel into your active environment:

 ```bash
@@ -115,7 +122,31 @@ Do not delete the local vLLM source directory.
 The repository contains C++ extensions and runtime libraries required for correct CPU inference on aarch64 after wheel installation.
 {{% /notice %}}

-## Quick validation via Offline Inferencing
+## Validate your build with offline inference
+
+Run a quick test to confirm your Arm-optimized vLLM build works as expected. Use the built-in chat example to perform offline inference and verify that oneDNN and Arm Compute Library optimizations are active.
+
+```bash
+python examples/offline_inference/basic/chat.py \
+--dtype=bfloat16 \
+--model TinyLlama/TinyLlama-1.1B-Chat-v1.0
+```
+
+This command runs a small Hugging Face model in bfloat16 precision, streaming generated tokens to the console. You should see output similar to:
+
+```output
+Generated Outputs:
+--------------------------------------------------------------------------------
+Prompt: None
+
+Generated text: 'The Importance of Higher Education\n\nHigher education is a fundamental right'
+--------------------------------------------------------------------------------
+Adding requests: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 9552.05it/s]
+Processed prompts: 100%|████████████████████████████████████████████████████████████████████████| 10/10 [00:01<00:00, 6.78it/s, est. speed input: 474.32 toks/s, output: 108.42 toks/s]
+...
+```
+
+If you see token streaming and generated text, your vLLM build is correctly configured for Arm64 inference.

 Once your Arm-optimized vLLM build completes, you can validate it by running a small offline inference example. This ensures that the CPU-specific backend and oneDNN and ACL optimizations were correctly compiled into your build.
 Run the built-in chat example included in the vLLM repository:
@@ -144,7 +175,7 @@ Processed prompts: 100%|██████████████████
 ```

 {{% notice Note %}}
-As CPU support in vLLM continues to mature, these manual build steps will eventually be replaced by a streamlined pip install workflow for aarch64, simplifying future deployments on Arm servers.
+As CPU support in vLLM continues to mature, these manual build steps will eventually be replaced by a streamlined `pip` install workflow for aarch64, simplifying future deployments on Arm servers.
 {{% /notice %}}

 You have now verified that your vLLM Arm64 build runs correctly and performs inference using Arm-optimized kernels.
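
The overview above describes two ways to use vLLM: the OpenAI-compatible server and the Python API. For the Python API path, a minimal sketch of programmatic offline inference might look like the following; it assumes the Arm64 CPU build from this Learning Path is installed, and the model name and sampling settings are illustrative rather than values prescribed by the commit.

```python
# Minimal sketch of offline inference through vLLM's Python API.
# Assumes the Arm64 CPU build is installed; model and sampling values are examples only.
from vllm import LLM, SamplingParams

prompts = ["Explain why KV caching speeds up LLM decoding."]
sampling = SamplingParams(temperature=0.7, max_tokens=64)

# Load a small Hugging Face model in bfloat16, mirroring the chat example above.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", dtype="bfloat16")

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```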

content/learning-paths/servers-and-cloud-computing/vllm-acceleration/2-quantize-model.md

Lines changed: 12 additions & 8 deletions
@@ -1,11 +1,11 @@
 ---
-title: Quantize an LLM to INT4 for Arm Platform
+title: Quantize an LLM to INT4
 weight: 3

 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
-## Accelerating LLMs with 4-bit Quantization
+## Accelerate LLMs with 4-bit quantization

 You can accelerate many LLMs on Arm CPUs with 4‑bit quantization. In this section, you’ll quantize the deepseek-ai/DeepSeek-V2-Lite model to 4-bit integer (INT4) weights.
 The quantized model runs efficiently through vLLM’s INT4 inference path, which is accelerated by Arm KleidiAI microkernels.
@@ -35,7 +35,7 @@ If the model you plan to quantize is gated on Hugging Face (e.g., DeepSeek or pr
 huggingface-cli login
 ```

-## INT4 Quantization Recipe
+## Apply the INT4 quantization recipe

 Using a file editor of your choice, save the following code into a file named `quantize_vllm_models.py`:

@@ -134,12 +134,16 @@ This script creates a Arm KleidiAI INT4 quantized copy of the vLLM model and sav

 ## Quantize DeepSeek‑V2‑Lite model

-### Quantization parameter tuning
-Quantization parameters determine how the model’s floating-point weights and activations are converted into lower-precision integer formats. Choosing the right combination is essential for balancing model accuracy, memory footprint, and runtime throughput on Arm CPUs.
+Quantizing your model to INT4 format significantly reduces memory usage and improves inference speed on Arm CPUs. In this section, you'll apply the quantization script to the DeepSeek‑V2‑Lite model, tuning key parameters for optimal performance and accuracy. This process prepares your model for efficient deployment with vLLM on Arm-based servers.

-1. You can choose `minmax` (faster model quantization) or `mse` (more accurate but slower model quantization) method.
-2. `channelwise` is a good default for most models.
-3. `groupwise` can improve accuracy further; `--groupsize 32` is common.
+## Tune quantization parameters
+Quantization parameters control how the model’s floating-point weights and activations are converted to lower-precision integer formats. The right settings help you balance accuracy, memory usage, and performance on Arm CPUs.
+
+- Use `minmax` for faster quantization, or `mse` for higher accuracy (but slower)
+- Choose `channelwise` for most models; it’s a reliable default
+- Try `groupwise` for potentially better accuracy; `--groupsize 32` is a common choice
+
+Pick the combination that fits your accuracy and speed needs.

 Execute the following command to quantize the DeepSeek-V2-Lite model:

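
To make the channelwise versus groupwise distinction above more concrete, here is a minimal, self-contained NumPy sketch of group-wise min-max INT4 quantization. It is not the Learning Path's `quantize_vllm_models.py` recipe, and the helper names are illustrative; it only shows how a `--groupsize 32` setting partitions each weight row into groups that share a scale.

```python
# Illustrative sketch (not quantize_vllm_models.py): group-wise symmetric INT4
# quantization with min-max scaling, to show what "groupwise" / groupsize 32 means.
import numpy as np

def quantize_int4_groupwise(weights: np.ndarray, group_size: int = 32):
    """Quantize each row of `weights` in groups of `group_size` columns."""
    rows, cols = weights.shape
    assert cols % group_size == 0, "columns must divide evenly into groups"
    groups = weights.reshape(rows, cols // group_size, group_size)

    # Min-max style scaling: the largest magnitude in each group maps to the
    # INT4 positive limit (7), so every group gets its own scale factor.
    scales = np.abs(groups).max(axis=-1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)

    quantized = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return quantized.reshape(rows, cols), scales.squeeze(-1)

def dequantize_int4_groupwise(quantized: np.ndarray, scales: np.ndarray, group_size: int = 32):
    rows, cols = quantized.shape
    groups = quantized.reshape(rows, cols // group_size, group_size).astype(np.float32)
    return (groups * scales[..., None]).reshape(rows, cols)

# With group_size equal to the row length this degenerates to channelwise
# (one scale per output row); smaller groups trade extra scale storage for accuracy.
w = np.random.randn(4, 64).astype(np.float32)
q, s = quantize_int4_groupwise(w, group_size=32)
w_hat = dequantize_int4_groupwise(q, s, group_size=32)
print("max reconstruction error:", float(np.abs(w - w_hat).max()))
```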
content/learning-paths/servers-and-cloud-computing/vllm-acceleration/3-run-inference-and-serve.md

Lines changed: 16 additions & 15 deletions
@@ -9,17 +9,17 @@ layout: learningpathall
 ## Batch Sizing in vLLM

 vLLM uses dynamic continuous batching to maximize hardware utilization. Two key parameters govern this process:
-* `max_model_len` — The maximum sequence length (number of tokens per request).
+* `max_model_len`, which is the maximum sequence length (number of tokens per request).
 No single prompt or generated sequence can exceed this limit.
-* `max_num_batched_tokens` — The total number of tokens processed in one batch across all requests.
+* `max_num_batched_tokens`, which is the total number of tokens processed in one batch across all requests.
 The sum of input and output tokens from all concurrent requests must stay within this limit.

 Together, these parameters determine how much memory the model can use and how effectively CPU threads are saturated.
 On Arm-based servers, tuning them helps achieve stable throughput while avoiding excessive paging or cache thrashing.

 ## Serve an OpenAI‑compatible API

-Start vLLM’s OpenAI-compatible API server using the quantized INT4 model and environment variables optimized for performance.
+Start vLLM’s OpenAI-compatible API server using the quantized INT4 model and environment variables optimized for performance:

 ```bash
 export VLLM_TARGET_DEVICE=cpu
@@ -125,9 +125,9 @@ This validates multi‑request behavior and shows aggregate throughput in the se
 (APIServer pid=4474) INFO: 127.0.0.1:44120 - "POST /v1/chat/completions HTTP/1.1" 200 OK
 (APIServer pid=4474) INFO 11-10 01:01:06 [loggers.py:221] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 57.5 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
 ```
-## Optional: Serve a BF16 (Non-Quantized) Model
+## Serve a BF16 (non-quantized) model (optional)

-For a non-quantized path, vLLM on Arm can run BF16 end-to-end using its oneDNN integration (which routes to Arm-optimized kernels via ACL under aarch64).
+For a non-quantized path, vLLM on Arm can run BF16 end-to-end using its oneDNN integration (which routes to Arm-optimized kernels using ACL under aarch64).

 ```bash
 vllm serve deepseek-ai/DeepSeek-V2-Lite \
@@ -136,17 +136,18 @@ vllm serve deepseek-ai/DeepSeek-V2-Lite \
 ```
 Use this BF16 setup to establish a quality reference baseline, then compare throughput and latency against your INT4 deployment to quantify the performance/accuracy trade-offs on your Arm system.

-## Go Beyond: Power Up Your vLLM Workflow
+## Go beyond: power up your vLLM workflow
 Now that you’ve successfully quantized, served, and benchmarked a model using vLLM on Arm, you can build on what you’ve learned to push performance, scalability, and usability even further.

-**Try Different Models**
-Extend your workflow to other models on Hugging Face that are compatible with vLLM and can benefit from Arm acceleration:
-* Meta Llama 2 / Llama 3 – Strong general-purpose baselines; excellent for comparing BF16 vs INT4 performance.
-* Qwen / Qwen-Chat – High-quality multilingual and instruction-tuned models.
-* Gemma (Google) – Compact and efficient architecture; ideal for edge or cost-optimized serving.
-
-You can quantize and serve them using the same `quantize_vllm_models.py` recipe, just update the model name.
+## Try different models
+Explore other Hugging Face models that work well with vLLM and take advantage of Arm acceleration:

-**Connect a chat client:** Link your server with OpenAI-compatible UIs like [Open WebUI](https://github.com/open-webui/open-webui)
+- Meta Llama 2 and Llama 3: these versatile models work well for general tasks, and you can try them to compare BF16 and INT4 performance
+- Qwen and Qwen-Chat: these models support multiple languages and are tuned for instructions, giving you high-quality results
+- Gemma (Google): this compact and efficient model is a good choice for edge devices or deployments where cost matters

-You can continue exploring how Arm’s efficiency, oneDNN+ACL acceleration, and vLLM’s dynamic batching combine to deliver fast, sustainable, and scalable AI inference on modern Arm architectures.
+You can quantize and serve any of these models using the same `quantize_vllm_models.py` script. Just update the model name in the script.
+
+You can also try connecting a chat client by linking your server with OpenAI-compatible user interfaces such as [Open WebUI](https://github.com/open-webui/open-webui).
+
+Continue exploring how Arm efficiency, oneDNN and ACL acceleration, and vLLM dynamic batching work together to provide fast, sustainable, and scalable AI inference on modern Arm architectures.
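
The serving sections above describe the OpenAI-compatible /v1/chat/completions endpoint and suggest connecting OpenAI-style clients to it. A rough sketch of such a client call with the official OpenAI Python SDK follows; it assumes the server started with `vllm serve` is listening on vLLM's default port 8000, and the model name, API key placeholder, and prompt are illustrative only.

```python
# Minimal sketch of an OpenAI-compatible client call against a local vLLM server.
# Assumes `vllm serve ...` is running on the default port 8000; the model name,
# api_key placeholder, and prompt are illustrative, not prescribed by the Learning Path.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V2-Lite",
    messages=[{"role": "user", "content": "Summarize why INT4 quantization helps on Arm CPUs."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```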
