Commit 69f6fbc

Update benchmarking.md
1 parent 7cf327c commit 69f6fbc

content/learning-paths/servers-and-cloud-computing/onnx-on-azure/benchmarking.md

Lines changed: 34 additions & 48 deletions
weight: 6
layout: learningpathall
---

Now that you have validated ONNX Runtime with Python-based timing (for example, the SqueezeNet baseline test), you can move on to a dedicated benchmarking utility called `onnxruntime_perf_test`. This tool is designed for systematic performance evaluation of ONNX models and captures more detailed statistics than simple Python timing.

This helps you evaluate ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances, and also lets you compare inference times against similar D-series x86_64-based virtual machines on Azure.

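For reference, the Python-based timing mentioned here can be as simple as the sketch below. It is only a rough baseline and makes a few assumptions: the `onnxruntime` and `numpy` packages are installed, `squeezenet-int8.onnx` is in the current directory, and the model takes a single float32 input of shape (1, 3, 224, 224).

```python
import time

import numpy as np
import onnxruntime as ort

# Assumptions: squeezenet-int8.onnx is in the current directory and expects
# a single float32 input of shape (1, 3, 224, 224).
session = ort.InferenceSession("squeezenet-int8.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Warm up so one-time initialization costs do not skew the numbers.
for _ in range(10):
    session.run(None, {input_name: dummy_input})

# Time 100 runs and report the average latency in milliseconds.
runs = 100
start = time.perf_counter()
for _ in range(runs):
    session.run(None, {input_name: dummy_input})
elapsed = time.perf_counter() - start
print(f"Average inference time: {elapsed / runs * 1000:.3f} ms")
```

`onnxruntime_perf_test` automates this kind of measurement and adds percentile latency statistics.
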
## Run the performance tests using onnxruntime_perf_test

`onnxruntime_perf_test` is a performance benchmarking tool included in the ONNX Runtime source code. It measures the inference performance of ONNX models and supports multiple execution providers, such as CPU, GPU, or other backends. On Arm64 VMs, CPU execution is the focus.

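Before moving on, you can optionally confirm which execution providers your ONNX Runtime installation exposes. This short sketch assumes the `onnxruntime` Python package from the earlier steps; on a CPU-only Cobalt 100 VM, the list typically includes `CPUExecutionProvider`.

```python
import onnxruntime as ort

# Print the execution providers compiled into this ONNX Runtime build.
# On an Arm64, CPU-only VM the list typically includes CPUExecutionProvider.
print(ort.get_available_providers())
```
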
### Install Required Build Tools

Before building or running `onnxruntime_perf_test`, you need to install a set of development tools and libraries. These packages are required for compiling ONNX Runtime and for handling model serialization via Protocol Buffers.

```console
sudo apt update
sudo apt install -y build-essential cmake git unzip pkg-config
sudo apt install -y protobuf-compiler libprotobuf-dev libprotoc-dev git
```

Then verify the protobuf installation:

```console
protoc --version
```

You should see output similar to:

```output
libprotoc 3.21.12
```

### Build ONNX Runtime from Source

The benchmarking tool `onnxruntime_perf_test` isn’t available as a pre-built binary for any platform, so you will have to build it from source. The build is expected to take around 40 minutes.

Clone the onnxruntime repository:

```console
git clone --recursive https://github.com/microsoft/onnxruntime
cd onnxruntime
```

Now build the benchmark tool:

```console
./build.sh --config Release --build_dir build/Linux --build_shared_lib --parallel --build --update --skip_tests
```

After the build completes, you should see the executable at:

```output
./build/Linux/Release/onnxruntime_perf_test
```

### Run the benchmark

Now that you have built the benchmarking tool, you can run inference benchmarks on the SqueezeNet INT8 model (`squeezenet-int8.onnx`):

```console
./build/Linux/Release/onnxruntime_perf_test -e cpu -r 100 -m times -s -Z -I ../squeezenet-int8.onnx
```

Breakdown of the flags:

- `-e cpu`: use the CPU execution provider (not GPU or another backend).
- `-r 100`: run 100 inference passes for statistical reliability.
- `-m times`: run in “repeat N times” mode, which is useful for latency-focused measurement.
- `-s`: show detailed per-run statistics (latency distribution).
- `-Z`: disable intra-op thread spinning, which reduces wasted CPU cycles when the runtime is idle between runs, especially on high-core-count systems like Cobalt 100.
- `-I`: pass the ONNX model path directly, without using pre-generated input/output test data.
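
If you want to benchmark several models with the same settings, you can wrap the command in a small script. The sketch below simply shells out to the binary built earlier with the exact flags shown above; the list of model paths is a placeholder you would replace with your own.

```python
import subprocess

PERF_TEST = "./build/Linux/Release/onnxruntime_perf_test"

# Placeholder list: replace with the ONNX models you want to benchmark.
models = ["../squeezenet-int8.onnx"]

for model in models:
    # Same flags as the manual run: CPU provider, 100 repetitions, detailed
    # statistics, no intra-op thread spinning, model path passed directly.
    cmd = [PERF_TEST, "-e", "cpu", "-r", "100", "-m", "times", "-s", "-Z", "-I", model]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(f"=== {model} ===")
    print(result.stdout)
```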

You should see output similar to:

```output
Disabling intra-op thread spinning between runs
...
P999 Latency: 0.00190312 s
```

### Benchmark Metrics Explained

- **Average Inference Time**: The mean time taken to process a single inference request across all runs. Lower values indicate faster model execution.
- **Throughput**: The number of inference requests processed per second. Higher throughput reflects the model’s ability to handle larger workloads efficiently.
- **CPU Utilization**: The percentage of CPU resources used during inference. A value close to 100% indicates full CPU usage, which is expected during performance benchmarking.
- **Peak Memory Usage**: The maximum amount of system memory (RAM) consumed during inference. Lower memory usage is beneficial for resource-constrained environments.
- **P50 Latency (Median Latency)**: The time below which 50% of inference requests complete. Represents typical latency under normal load.
- **Latency Consistency**: Describes the stability of latency values across all runs. "Consistent" indicates predictable inference performance with minimal jitter.

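To make these definitions concrete, here is a small sketch that derives the same statistics from a list of per-run latencies. The latency values are hypothetical and only illustrate the relationships, for example that single-stream throughput is roughly the inverse of the average latency (1 / 1.86 ms ≈ 538 inferences/sec, matching the Arm64 summary below).

```python
import numpy as np

# Hypothetical per-run latencies in seconds, e.g. collected from repeated runs.
latencies = np.array([0.00186, 0.00181, 0.00190, 0.00184, 0.00188])

avg = latencies.mean()
print(f"Average inference time: {avg * 1000:.3f} ms")
# For a single-stream benchmark, throughput is approximately 1 / average latency.
print(f"Throughput: {1.0 / avg:.1f} inferences/sec")
print(f"P50 latency: {np.percentile(latencies, 50) * 1000:.3f} ms")
print(f"P95 latency: {np.percentile(latencies, 95) * 1000:.3f} ms")
print(f"Max latency: {latencies.max() * 1000:.3f} ms")
```
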
### Benchmark summary on Arm64

Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pro 24.04 LTS virtual machine**.

| **Metric** | **Value on Virtual Machine** |
|----------------------------|-------------------------------|
| ... | ... |
| **Latency Consistency** | Consistent |

### Benchmark summary on x86

Here is a summary of benchmark results collected on an x86 **D4s_v6 Ubuntu Pro 24.04 LTS virtual machine**.

| **Metric** | **Value on Virtual Machine** |
|----------------------------|-------------------------------|
| **Average Inference Time** | 1.413 ms |
| **Throughput** | 707.48 inferences/sec |
| **CPU Utilization** | 100% |
| **Peak Memory Usage** | 38.80 MB |
| **P50 Latency** | 1.396 ms |
| **P90 Latency** | 1.501 ms |
| **P95 Latency** | 1.520 ms |
| **P99 Latency** | 1.794 ms |
| **P999 Latency** | 1.794 ms |
| **Max Latency** | 1.794 ms |
| **Latency Consistency** | Consistent |

### Highlights from Benchmarking on Azure Cobalt 100 Arm64 VMs

The results on Arm64 virtual machines demonstrate:

- **Low-Latency Inference:** Achieved consistent average inference times of ~1.86 ms on Arm64.
- **Strong and Stable Throughput:** Sustained throughput of over 538 inferences/sec using the `squeezenet-int8.onnx` model on D4ps_v6 instances.
- **Lightweight Resource Footprint:** Peak memory usage stayed below 37 MB, with CPU utilization around 96%, ideal for efficient edge or cloud inference.
- **Consistent Performance:** P50, P95, and Max latency remained tightly bound, showcasing reliable performance on Azure Cobalt 100 Arm-based infrastructure.

You have now benchmarked ONNX Runtime on an Azure Cobalt 100 Arm64 virtual machine and compared the results with a similar x86_64 virtual machine.
