content/learning-paths/servers-and-cloud-computing/onnx-on-azure/_index.md (4 additions & 4 deletions)
@@ -7,22 +7,22 @@ cascade:
 
 minutes_to_complete: 60
 
-who_is_this_for: This Learning Path introduces ONNX deployment on Microsoft Azure Cobalt 100 (Arm-based) virtual machines. It is designed for developers migrating ONNX-based applications from x86_64 to Arm with minimal or no changes.
+who_is_this_for: This Learning Path introduces ONNX deployment on Microsoft Azure Cobalt 100 (Arm-based) virtual machines. It is designed for developers deploying ONNX-based applications on Arm-based machines.
 
 learning_objectives:
 - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image.
 - Deploy ONNX on the Ubuntu Pro virtual machine.
-- Perform ONNX baseline testing and benchmarking on both x86_64 and Arm64 virtual machines.
+- Perform ONNX baseline testing and benchmarking on Arm64 virtual machines.
 
 prerequisites:
 - A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6).
 - Basic understanding of Python and machine learning concepts.
 - Familiarity with [ONNX Runtime](https://onnxruntime.ai/docs/) and Azure cloud services.
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/background.md (2 additions & 2 deletions)
@@ -6,7 +6,7 @@ weight: 2
 layout: "learningpathall"
 ---
 
-## Cobalt 100 Arm-based processor
+## Azure Cobalt 100 Arm-based processor
 
 Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor: the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. These include web and application servers, data analytics, open-source databases, caching systems, and more. Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance.
 
@@ -16,6 +16,6 @@ To learn more about Cobalt 100, refer to the blog [Announcing the preview of new
 ONNX (Open Neural Network Exchange) is an open-source format designed for representing machine learning models.
 It provides interoperability between different deep learning frameworks, enabling models trained in one framework (such as PyTorch or TensorFlow) to be deployed and run in another.
 
-ONNX models are serialized into a standardized format that can be executed by the **ONNX Runtime**, a high-performance inference engine optimized for CPU, GPU, and specialized hardware accelerators. This separation of model training and inference allows developers to build flexible, portable, and production-ready AI workflows.
+ONNX models are serialized into a standardized format that can be executed by the ONNX Runtime, a high-performance inference engine optimized for CPU, GPU, and specialized hardware accelerators. This separation of model training and inference allows developers to build flexible, portable, and production-ready AI workflows.
 
 ONNX is widely used in cloud, edge, and mobile environments to deliver efficient and scalable inference for deep learning models. Learn more from the [ONNX official website](https://onnx.ai/) and the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).
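To make the interoperability described in this file concrete, here is a minimal sketch of exporting a PyTorch model to ONNX and running it with ONNX Runtime. The toy model, the file name `tiny.onnx`, and the tensor shapes are assumptions used only for illustration; they are not part of the files changed in this commit.

```python
# Minimal sketch: export a PyTorch model to ONNX, then run it with ONNX Runtime.
# The model, file name, and shapes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Serialize the model into the ONNX format
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "tiny.onnx", input_names=["input"], output_names=["output"])

# Load and execute the same model with the ONNX Runtime inference engine
session = ort.InferenceSession("tiny.onnx", providers=["CPUExecutionProvider"])
result = session.run(None, {"input": np.random.rand(1, 4).astype(np.float32)})
print(result[0].shape)  # (1, 2)
```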
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/baseline.md (11 additions & 9 deletions)
@@ -7,12 +7,11 @@ layout: learningpathall
 ---
 
 
-## Baseline testing using ONNX Runtime:
+## Baseline Testing using ONNX Runtime:
 
-This test measures the inference latency of the ONNX Runtime by timing how long it takes to process a single input using the `squeezenet-int8.onnx model`. It helps evaluate how efficiently the model runs on the target hardware.
-
-Create a **baseline.py** file with the below code for baseline test of ONNX:
+The purpose of this test is to measure the inference latency of ONNX Runtime on your Azure Cobalt 100 VM. By timing how long it takes to process a single input through the SqueezeNet INT8 model, you can validate that ONNX Runtime is functioning correctly and get a baseline performance measurement for your target hardware.
 
+Create a file named `baseline.py` with the following code:
 ```python
 import onnxruntime as ort
 import numpy as np
@@ -29,12 +28,12 @@ end = time.time()
 print("Inference time:", end - start)
 ```
 
-Run the baseline test:
+Run the baseline script to measure inference time:
…
 - 224 x 224: image resolution (common for models like SqueezeNet)
 {{% /notice %}}
 
+This indicates the model successfully executed a single forward pass through the SqueezeNet INT8 ONNX model and returned results.
+
 #### Output summary:
 
-- Single inference latency: ~2.60 milliseconds (0.00260 sec)
-- This shows the initial (cold-start) inference performance of ONNX Runtime on CPU using an optimized int8 quantized model.
-- This demonstrates that the setup is fully working, and ONNX Runtime efficiently executes quantized models on Arm64.
+Single inference latency(0.00260 sec): This is the time required for the model to process one input image and produce an output. The first run includes graph loading, memory allocation, and model initialization overhead.
+Subsequent inferences are usually faster due to caching and optimized execution.
+
+This demonstrates that the setup is fully working, and ONNX Runtime efficiently executes quantized models on Arm64.
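The baseline script appears only partially in the diff above. For orientation, a complete script along those lines might look like the sketch below; the model path, the input-name lookup, and the (1, 3, 224, 224) input shape are assumptions based on the SqueezeNet description, not the exact contents of the Learning Path's `baseline.py`.

```python
# Sketch of a single-inference latency test (assumed details; the Learning Path script may differ)
import time
import numpy as np
import onnxruntime as ort

# Load the INT8-quantized SqueezeNet model on the CPU execution provider
session = ort.InferenceSession("squeezenet-int8.onnx", providers=["CPUExecutionProvider"])

# Build a random input matching the model's expected NCHW shape (1, 3, 224, 224)
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Time a single forward pass
start = time.time()
outputs = session.run(None, {input_name: dummy_input})
end = time.time()

print("Inference time:", end - start)
```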
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/benchmarking.md (36 additions & 50 deletions)
@@ -6,59 +6,63 @@ weight: 6
 layout: learningpathall
 ---
 
-Now that you’ve set up and run the ONNX model (e.g., SqueezeNet), you can use it to benchmark inference performance using Python-based timing or tools like **onnxruntime_perf_test**. This helps evaluate the ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances.
-
-You can also compare the inference time between Cobalt 100 (Arm64) and similar D-series x86_64-based virtual machine on Azure.
+Now that you have validated ONNX Runtime with Python-based timing (e.g., SqueezeNet baseline test), you can move to using a dedicated benchmarking utility called `onnxruntime_perf_test`. This tool is designed for systematic performance evaluation of ONNX models, allowing you to capture more detailed statistics than simple Python timing.
+This helps evaluate the ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances and other x86_64 instances. architectures.
 
 ## Run the performance tests using onnxruntime_perf_test
-The **onnxruntime_perf_test** is a performance benchmarking tool included in the ONNX Runtime source code. It is used to measure the inference performance of ONNX models under various runtime conditions (like CPU, GPU, or other execution providers).
+The `onnxruntime_perf_test` is a performance benchmarking tool included in the ONNX Runtime source code. It is used to measure the inference performance of ONNX models and supports multiple execution providers (like CPU, GPU, or other execution providers). on Arm64 VMs, CPU execution is the focus.
 
 ### Install Required Build Tools
+Before building or running `onnxruntime_perf_test`, you will need to install a set of development tools and libraries. These packages are required for compiling ONNX Runtime and handling model serialization via Protocol Buffers.
…
-The benchmarking tool, **onnxruntime_perf_test**, isn’t available as a pre-built binary artifact for any platform. So, you have to build it from the source, which is expected to take around 40-50 minutes.
+The benchmarking tool`onnxruntime_perf_test`, isn’t available as a pre-built binary for any platform. So, you will have to build it from the source, which is expected to take around 40 minutes.
…
-This will build the benchmark tool inside ./build/Linux/Release/onnxruntime_perf_test.
+You should see the executable at:
+```output
+./build/Linux/Release/onnxruntime_perf_test
+```
 
 ### Run the benchmark
-Now that the benchmarking tool has been built, you can benchmark the **squeezenet-int8.onnx** model, as below:
+Now that you have built the benchmarking tool, you can run inference benchmarks on the SqueezeNet INT8 model:
 
 ```console
-./build/Linux/Release/onnxruntime_perf_test -e cpu -r 100 -m times -s -Z -I <path-to-squeezenet-int8.onnx>
+./build/Linux/Release/onnxruntime_perf_test -e cpu -r 100 -m times -s -Z -I ../squeezenet-int8.onnx
 ```
--**e cpu**: Use the CPU execution provider (not GPU or any other backend).
--**r 100**: Run 100 inferences.
--**m times**: Use "repeat N times" mode.
--**s**: Show detailed statistics.
--**Z**: Disable intra-op thread spinning (reduces CPU usage when idle between runs).
--**I**: Input the ONNX model path without using input/output test data.
+Breakdown of the flags:
+-e cpu → Use the CPU execution provider.
+-r 100 → Run 100 inference passes for statistical reliability.
+-m times → Run in “repeat N times” mode. Useful for latency-focused measurement.
+-s → Show detailed per-run statistics (latency distribution).
+-Z → Disable intra-op thread spinning. Reduces CPU waste when idle between runs, especially on high-core systems like Cobalt 100.
+-I → Input the ONNX model path directly, skipping pre-generated test data.
 
-You should see an output similar to:
+You should see output similar to:
 
 ```output
 Disabling intra-op thread spinning between runs
@@ -84,12 +88,12 @@ P999 Latency: 0.00190312 s
 ```
 ### Benchmark Metrics Explained
 
--**Average Inference Time**: The mean time taken to process a single inference request across all runs. Lower values indicate faster model execution.
--**Throughput**: The number of inference requests processed per second. Higher throughput reflects the model’s ability to handle larger workloads efficiently.
--**CPU Utilization**: The percentage of CPU resources used during inference. A value close to 100% indicates full CPU usage, which is expected during performance benchmarking.
--**Peak Memory Usage**: The maximum amount of system memory (RAM) consumed during inference. Lower memory usage is beneficial for resource-constrained environments.
--**P50 Latency (Median Latency)**: The time below which 50% of inference requests complete. Represents typical latency under normal load.
--**Latency Consistency**: Describes the stability of latency values across all runs. "Consistent" indicates predictable inference performance with minimal jitter.
+*Average Inference Time: The mean time taken to process a single inference request across all runs. Lower values indicate faster model execution.
+*Throughput: The number of inference requests processed per second. Higher throughput reflects the model’s ability to handle larger workloads efficiently.
+*CPU Utilization: The percentage of CPU resources used during inference. A value close to 100% indicates full CPU usage, which is expected during performance benchmarking.
+*Peak Memory Usage: The maximum amount of system memory (RAM) consumed during inference. Lower memory usage is beneficial for resource-constrained environments.
+*P50 Latency (Median Latency): The time below which 50% of inference requests complete. Represents typical latency under normal load.
+*Latency Consistency: Describes the stability of latency values across all runs. "Consistent" indicates predictable inference performance with minimal jitter.
 
 ### Benchmark summary on Arm64:
 Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pro 24.04 LTS virtual machine**.
@@ -109,30 +113,12 @@ Here is a summary of benchmark results collected on an Arm64 **D4ps_v6 Ubuntu Pr
 |**Latency Consistency**| Consistent |
 
 
-### Benchmark summary on x86
-Here is a summary of benchmark results collected on x86 **D4s_v6 Ubuntu Pro 24.04 LTS virtual machine**.
…
-### Highlights from Ubuntu Pro 24.04 Arm64 Benchmarking
+### Highlights from Benchmarking on Azure Cobalt 100 Arm64 VMs
 
-When comparing the results on Arm64 vs x86_64 virtual machines:
--**Low-Latency Inference:** Achieved consistent average inference times of ~1.86 ms on Arm64.
--**Strong and Stable Throughput:** Sustained throughput of over 538 inferences/sec using the `squeezenet-int8.onnx` model on D4ps_v6 instances.
--**Lightweight Resource Footprint:** Peak memory usage stayed below 37 MB, with CPU utilization around 96%, ideal for efficient edge or cloud inference.
--**Consistent Performance:** P50, P95, and Max latency remained tightly bound, showcasing reliable performance on Azure Cobalt 100 Arm-based infrastructure.
+The results on Arm64 virtual machines demonstrate:
+- Low-Latency Inference: Achieved consistent average inference times of ~1.86 ms on Arm64.
+- Strong and Stable Throughput: Sustained throughput of over 538 inferences/sec using the `squeezenet-int8.onnx` model on D4ps_v6 instances.
+- Lightweight Resource Footprint: Peak memory usage stayed below 37 MB, with CPU utilization around 96%, ideal for efficient edge or cloud inference.
+- Consistent Performance: P50, P95, and Max latency remained tightly bound, showcasing reliable performance on Azure Cobalt 100 Arm-based infrastructure.
 
-You have now benchmarked ONNX on an Azure Cobalt 100 Arm64 virtual machine and compared results with x86_64.
+You have now successfully benchmarked inference time of ONNX models on an Azure Cobalt 100 Arm64 virtual machine.
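The install and build steps for `onnxruntime_perf_test` are collapsed in the diff above. For orientation, a build from source typically follows a pattern like the sketch below; the package list and build flags are assumptions and may differ from the Learning Path instructions, so verify them against the ONNX Runtime build documentation.

```console
# Assumed build steps; verify against the Learning Path and ONNX Runtime build docs
sudo apt-get update
sudo apt-get install -y build-essential cmake git python3-dev python3-pip libprotobuf-dev protobuf-compiler

# Clone ONNX Runtime with its submodules and build the Release configuration
git clone --recursive https://github.com/microsoft/onnxruntime.git
cd onnxruntime
./build.sh --config Release --build_shared_lib --parallel --skip_tests

# The benchmarking binary is then expected at:
ls ./build/Linux/Release/onnxruntime_perf_test
```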
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/create-instance.md (11 additions & 5 deletions)
@@ -1,18 +1,24 @@
 ---
-title: Create an Armbased cloud virtual machine using Microsoft Cobalt 100 CPU
+title: Create an Arm-based Azure VM with Cobalt 100
 weight: 3
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-## Introduction
+## Set up your development environment
 
-There are several ways to create an Arm-based Cobalt 100 virtual machine : the Microsoft Azure console, the Azure CLI tool, or using your choice of IaC (Infrastructure as Code). This guide will use the Azure console to create a virtual machine with Arm-based Cobalt 100 Processor.
+There is more than one way to create an Arm-based Cobalt 100 virtual machine:
 
-This learning path focuses on the general-purpose virtual machine of the D series. Please read the guide on [Dpsv6 size series](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/general-purpose/dpsv6-series) offered by Microsoft Azure.
+- The Microsoft Azure portal
+- The Azure CLI
+- Your preferred infrastructure as code (IaC) tool
 
-If you have never used the Microsoft Cloud Platform before, please review the microsoft [guide to Create a Linux virtual machine in the Azure portal](https://learn.microsoft.com/en-us/azure/virtual-machines/linux/quick-create-portal?tabs=ubuntu).
+In this Learning Path, you will use the Azure portal to create a virtual machine with the Arm-based Azure Cobalt 100 processor.
+
+You will focus on the general-purpose virtual machines in the D-series. For further information, see the Microsoft Azure guide for the [Dpsv6 size series](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/general-purpose/dpsv6-series).
+
+While the steps to create this instance are included here for convenience, for further information on setting up Cobalt on Azure, see [Deploy a Cobalt 100 virtual machine on Azure Learning Path](/learning-paths/servers-and-cloud-computing/cobalt/).
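The file above lists the Azure portal, the Azure CLI, and IaC tooling as options for creating the VM. As a rough illustration of the CLI route, a session might look like the sketch below; the resource group, VM name, region, and image URN are placeholders and assumptions, so confirm the Ubuntu Pro 24.04 LTS Arm64 image URN before using it.

```console
# Assumed Azure CLI sketch for creating an Arm64 (Cobalt 100) VM; names and image URN are placeholders
az group create --name onnx-demo-rg --location eastus

# Look up an Ubuntu Pro 24.04 LTS Arm64 image URN published by Canonical
az vm image list --publisher Canonical --architecture Arm64 --all --output table

az vm create \
  --resource-group onnx-demo-rg \
  --name onnx-cobalt100-vm \
  --size Standard_D4ps_v6 \
  --image <ubuntu-pro-24.04-arm64-image-urn> \
  --admin-username azureuser \
  --generate-ssh-keys
```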