
Commit 66c2cea

Merge pull request #2548 from pareenaverma/content_review
vllm and flink tech review
2 parents fcff75a + f58cdf5 commit 66c2cea

File tree

4 files changed: +90 -32 lines


content/learning-paths/servers-and-cloud-computing/flink-on-gcp/baseline.md

Lines changed: 11 additions & 0 deletions
@@ -33,6 +33,17 @@ Verify the Maven installation:
 ```console
 mvn -version
 ```
+
+The output should look like:
+
+```output
+Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)
+Maven home: /opt/maven
+Java version: 17.0.13, vendor: N/A, runtime: /usr/lib64/jvm/java-17-openjdk-17
+Default locale: en, platform encoding: UTF-8
+OS name: "linux", version: "5.14.21-150500.55.124-default", arch: "aarch64", family: "unix"
+```
+
 At this point, both Java and Maven are installed and ready to use.

 ### Start the Flink Cluster
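If you want to script this check rather than eyeball it, the version field can be pulled out of the first line of `mvn -version` output. A minimal sketch, using the sample line from the output above in place of a live run (`first_line` and `version` are names introduced here):

```shell
# Parse the Maven version from the first line of `mvn -version` output.
# The sample string stands in for a live run; with Maven installed you
# could instead use: first_line=$(mvn -version | head -n 1)
first_line='Apache Maven 3.8.6 (84538c9988a25aec085021c365c560670ad80f63)'
version=$(echo "$first_line" | awk '{print $3}')
echo "$version"   # -> 3.8.6
```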

content/learning-paths/servers-and-cloud-computing/flink-on-gcp/installation.md

Lines changed: 5 additions & 5 deletions
@@ -24,7 +24,7 @@ Next, download the pre-built binary package for **Apache Flink** from the offici

 ```console
 cd /opt
-sudo wget https://dlcdn.apache.org/flink/flink-2.1.0/flink-2.1.0-bin-scala_2.12.tgz
+sudo wget https://dlcdn.apache.org/flink/flink-2.1.1/flink-2.1.1-bin-scala_2.12.tgz
 ```
 This command retrieves the official Flink binary distribution for installation on your VM.

@@ -39,15 +39,15 @@ The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) re
 Extract the downloaded `.tgz` archive to make the Flink files accessible for configuration.

 ```console
-sudo tar -xvzf flink-2.1.0-bin-scala_2.12.tgz
+sudo tar -xvzf flink-2.1.1-bin-scala_2.12.tgz
 ```
-After extraction, you will have a directory named `flink-2.1.0` under `/opt`.
+After extraction, you will have a directory named `flink-2.1.1` under `/opt`.

 **Rename the extracted directory for convenience:**
 For easier access and management, rename the extracted Flink directory to a simple name like `/opt/flink`.

 ```console
-sudo mv flink-2.1.0 /opt/flink
+sudo mv flink-2.1.1 /opt/flink
 ```
 This makes future references to your Flink installation path simpler and more consistent.

@@ -82,6 +82,6 @@ flink -v
 You should see an output similar to:

 ```output
-Version: 2.1.0, Commit ID: 4cb6bd3
+Version: 2.1.1, Commit ID: 074f8c5
 ```
 This confirms that Apache Flink has been installed and is ready for use.
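The version bump above touches the same number in three separate commands. If you maintain instructions like these, deriving every occurrence from one variable avoids the kind of mismatch this commit fixes by hand. A sketch under that assumption (`FLINK_VERSION` and `TARBALL` are names introduced here; the URL pattern is taken from the diff):

```shell
# Derive all version-specific strings from a single variable so a future
# bump (e.g. 2.1.1 -> 2.1.2) changes one line only.
FLINK_VERSION=2.1.1
TARBALL="flink-${FLINK_VERSION}-bin-scala_2.12.tgz"
echo "sudo wget https://dlcdn.apache.org/flink/flink-${FLINK_VERSION}/${TARBALL}"
echo "sudo tar -xvzf ${TARBALL}"
echo "sudo mv flink-${FLINK_VERSION} /opt/flink"
```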

content/learning-paths/servers-and-cloud-computing/vllm-acceleration/4-accuracy-benchmarking.md

Lines changed: 73 additions & 26 deletions
@@ -8,39 +8,48 @@ layout: learningpathall

 ## Why accuracy benchmarking

-The lm-evaluation-harness is the standard way to measure model accuracy across common academic benchmarks (for example, MMLU, HellaSwag, GSM8K) and runtimes (Hugging Face, vLLM, llama.cpp, etc.). In this module, you will run accuracy tests for both BF16 and INT4 deployments of your model served by vLLM on Arm-based servers.
+The LM Evaluation Harness (lm-eval-harness) is a widely used open-source framework for evaluating the accuracy of large language models on standardized academic benchmarks such as MMLU, HellaSwag, and GSM8K.
+It provides a consistent interface for evaluating models served through various runtimes (such as Hugging Face Transformers, vLLM, or llama.cpp) using the same datasets, few-shot templates, and scoring metrics.
+In this module, you will measure how quantization impacts model quality by comparing BF16 (non-quantized) and INT4 (quantized) versions of your model running on Arm-based servers.

 You will:
-* Install lm-eval-harness with vLLM support
-* Run benchmarks on a BF16 model and an INT4 (weight-quantized) model
-* Interpret key metrics and compare quality across precisions
+* Install lm-eval-harness with vLLM backend support.
+* Run benchmark tasks on both BF16 and INT4 model deployments.
+* Analyze and interpret accuracy differences between the two precisions.

 {{% notice Note %}}
-Results depend on CPU, dataset versions, and model choice. Use the same tasks and few-shot settings when comparing BF16 and INT4 to ensure a fair comparison.
+Accuracy results can vary depending on CPU, dataset versions, and model choice. Use the same tasks, few-shot settings, and evaluation batch size when comparing BF16 and INT4 results to ensure a fair comparison.
 {{% /notice %}}

 ## Prerequisites

-Before you start:
-* Complete the optimized build in “Overview and Optimized Build” and validate your vLLM install.
-* Optionally quantize a model using the “Quantize an LLM to INT4 for Arm Platform” module. We’ll reference the output directory name from that step.
+Before you begin, make sure your environment is ready for evaluation.
+You should have:
+* Completed the optimized build from the “Overview and Optimized Build” section and successfully validated your vLLM installation.
+* (Optional) Quantized a model using the “Quantize an LLM to INT4 for Arm Platform” module.
+The quantized model directory (for example, `DeepSeek-V2-Lite-w4a8dyn-mse-channelwise`) will be used as input for INT4 evaluation.
+If you haven’t quantized a model, you can still evaluate your BF16 baseline to establish a reference accuracy.

-## Install lm-eval-harness
+## Install LM Evaluation Harness

-Install the harness with vLLM extras in your active Python environment:
+You will install the LM Evaluation Harness with vLLM backend support, allowing direct evaluation against your running vLLM server.
+
+Install it inside your active Python environment:

 ```bash
 pip install "lm_eval[vllm]"
 pip install ray
 ```

 {{% notice Tip %}}
-If your benchmarks include gated models or datasets, run `huggingface-cli login` first so the harness can download what it needs.
+If your benchmarks include gated models or restricted datasets, run `huggingface-cli login` first.
+This ensures the harness can authenticate with Hugging Face and download any protected resources needed for evaluation.
 {{% /notice %}}

-## Recommended runtime settings for Arm CPU
+## Recommended Runtime Settings for Arm CPU

-Export the same performance-oriented environment variables used for serving. These enable Arm-optimized kernels through oneDNN+ACL and consistent thread pinning:
+Before running accuracy benchmarks, export the same performance-tuned environment variables you used for serving.
+These settings ensure vLLM runs with Arm-optimized kernels (via oneDNN + Arm Compute Library) and consistent thread affinity across all CPU cores during evaluation.

 ```bash
 export VLLM_TARGET_DEVICE=cpu
@@ -52,13 +61,28 @@ export OMP_NUM_THREADS="$(nproc)"
 export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libtcmalloc_minimal.so.4
 ```

+Explanation of settings:
+
+| Variable | Purpose |
+| --- | --- |
+| **`VLLM_TARGET_DEVICE=cpu`** | Forces vLLM to run entirely on CPU, ensuring evaluation results use Arm-optimized oneDNN kernels. |
+| **`VLLM_CPU_KVCACHE_SPACE=32`** | Reserves 32 GB for key/value caches used in attention. Adjust if evaluating with longer contexts or larger batches. |
+| **`VLLM_CPU_OMP_THREADS_BIND="0-$(($(nproc)-1))"`** | Pins OpenMP worker threads to physical cores (0 to N-1) to minimize OS thread migration and improve cache locality. |
+| **`VLLM_MLA_DISABLE=1`** | Disables GPU/MLA probing for faster initialization in CPU-only mode. |
+| **`ONEDNN_DEFAULT_FPMATH_MODE=BF16`** | Enables **bfloat16** math mode, using reduced-precision operations for faster compute while maintaining numerical stability. |
+| **`OMP_NUM_THREADS="$(nproc)"`** | Uses all available CPU cores to parallelize matrix multiplications and attention layers. |
+| **`LD_PRELOAD`** | Preloads **tcmalloc** (Thread-Caching Malloc) to reduce memory allocator contention under high concurrency. |
+
 {{% notice Note %}}
-`LD_PRELOAD` uses tcmalloc to reduce allocator contention. Install it via `sudo apt-get install -y libtcmalloc-minimal4` if you haven’t already.
+tcmalloc helps reduce allocator overhead when running multiple evaluation tasks in parallel.
+If it’s not installed, add it with `sudo apt-get install -y libtcmalloc-minimal4`.
 {{% /notice %}}

-## Accuracy Benchmarking Meta‑Llama‑3.1‑8B‑Instruct BF16 model
+## Accuracy Benchmarking Meta‑Llama‑3.1‑8B‑Instruct (BF16 Model)

-Run with a non-quantized model. Replace the model ID as needed.
+To establish a baseline accuracy reference, evaluate a non-quantized BF16 model served through vLLM.
+This run measures how the original model performs under Arm-optimized BF16 inference before applying INT4 quantization.
+Replace the model ID if you are using a different model variant or checkpoint.

 ```bash
 lm_eval \
@@ -69,12 +93,16 @@ lm_eval \
 --batch_size auto \
 --output_path results
 ```
+After completing this test, review the results directory for accuracy metrics (for example, `acc` and `acc_norm`) and record them as your BF16 baseline.

-## Accuracy Benchmarking INT4 quantized model
+Next, you’ll run the same benchmarks on the INT4 quantized model to compare accuracy across precisions.

-Use the INT4 quantization recipe & script from previous steps to quantize `meta-llama/Meta-Llama-3.1-8B-Instruct` model
+## Accuracy Benchmarking: INT4 quantized model

-Channelwise INT4 (MSE):
+Now that you’ve quantized your model using the INT4 recipe and script from the previous module, you can benchmark its accuracy using the same evaluation harness and task set.
+This test compares quantized (INT4) performance against your BF16 baseline, revealing how much accuracy is preserved after compression.
+Use the quantized directory generated earlier, for example:
+`Meta-Llama-3.1-8B-Instruct-w4a8dyn-mse-channelwise`.

 ```bash
 lm_eval \
@@ -85,29 +113,48 @@ lm_eval \
 --batch_size auto \
 --output_path results
 ```
+After this evaluation, compare the metrics from both runs.

 ## Interpreting results

-The harness prints per-task and aggregate scores (for example, `acc`, `acc_norm`, `exact_match`). Higher is generally better. Compare BF16 vs INT4 on the same tasks to assess quality impact.
+After running evaluations, the LM Evaluation Harness prints per-task and aggregate metrics such as `acc`, `acc_norm`, and `exact_match`.
+These represent model accuracy across various datasets and question formats; higher values indicate better performance.
+Key metrics include:
+* `acc` – Standard accuracy (fraction of correct predictions).
+* `acc_norm` – Normalized accuracy; adjusts for multiple-choice imbalance.
+* `exact_match` – Strict string-level match, typically used for reasoning or QA tasks.

+Compare BF16 and INT4 results on identical tasks to assess the accuracy–efficiency trade-off introduced by quantization.
 Practical tips:
-* Use the same tasks and few-shot settings across runs.
-* For quick iteration, you can add `--limit 200` to run on a subset.
+* Always use identical tasks, few-shot settings, and seeds across runs to ensure fair comparisons.
+* Add `--limit 200` for quick validation runs during tuning. This limits each task to 200 samples for faster iteration.

 ## Example results for Meta‑Llama‑3.1‑8B‑Instruct model

-These illustrative results are representative; actual scores may vary across hardware, dataset versions, and harness releases. Higher values indicate better accuracy.
+The following results are illustrative and serve as reference points.
+Your actual scores may differ based on hardware, dataset version, or lm-eval-harness release.

 | Variant | MMLU (acc±err) | HellaSwag (acc±err) |
 |---------------------------------|-------------------|---------------------|
 | BF16 | 0.5897 ± 0.0049 | 0.7916 ± 0.0041 |
 | INT4 Groupwise minmax (G=32) | 0.5831 ± 0.0049 | 0.7819 ± 0.0041 |
 | INT4 Channelwise MSE | 0.5712 ± 0.0049 | 0.7633 ± 0.0042 |

-Use these as ballpark expectations to check whether your runs are in a reasonable range, not as official targets.
+How to interpret:
+
+* BF16 baseline – Represents near-FP32 accuracy; serves as your quality reference.
+* INT4 Groupwise minmax – Retains almost all performance while reducing model size ~4× and improving throughput substantially.
+* INT4 Channelwise MSE – Slightly lower accuracy, often within 2–3 percentage points of BF16, still competitive for most production use cases.

 ## Next steps

-* Try additional tasks to match your usecase: `gsm8k`, `winogrande`, `arc_easy`, `arc_challenge`.
-* Sweep quantization recipes (minmax vs mse; channelwise vs groupwise, group size) to find a better accuracy/performance balance.
+* Broaden accuracy testing to cover reasoning, math, and commonsense tasks that reflect your real-world use cases:
+  GSM8K – Arithmetic and logical reasoning (sensitive to quantization).
+  Winogrande – Commonsense and pronoun disambiguation.
+  ARC-Easy / ARC-Challenge – Science and multi-step reasoning questions.
+  Running multiple benchmarks gives a more comprehensive picture of model robustness under different workloads.
+
+* Experiment with different quantization configurations to find the best accuracy–throughput trade-off for your hardware.
 * Record both throughput and accuracy to choose the best configuration for your workload.
+
+By iterating on these steps, you will build a custom performance and accuracy profile for your Arm deployment, helping you select the optimal quantization strategy and runtime configuration for your target workload.
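The BF16-to-INT4 gap in the example table can be quantified directly. A minimal sketch using the illustrative MMLU scores from the table above (reference points only, not official targets):

```shell
# Compute absolute and relative accuracy drop vs the BF16 baseline,
# using the illustrative MMLU scores from the example results table.
bf16=0.5897
for int4 in 0.5831 0.5712; do
  awk -v a="$bf16" -v b="$int4" \
    'BEGIN { printf "drop: %.4f (%.2f%% relative)\n", a - b, (a - b) / a * 100 }'
done
# -> drop: 0.0066 (1.12% relative)
# -> drop: 0.0185 (3.14% relative)
```

A drop of one to three percentage points, as here, is the scale of trade-off the "How to interpret" notes describe.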

content/learning-paths/servers-and-cloud-computing/vllm-acceleration/_index.md

Lines changed: 1 addition & 1 deletion
@@ -22,7 +22,7 @@ prerequisites:
 - Python 3.12 and basic familiarity with Hugging Face Transformers and quantization.

 author:
-- Nikhil Gupta
+- Nikhil Gupta, Pareena Verma

 ### Tags
 skilllevels: Introductory
