content/learning-paths/servers-and-cloud-computing/onnx-on-azure/_index.md
5 additions & 9 deletions
````diff
@@ -1,23 +1,19 @@
 ---
 title: Deploy SqueezeNet 1.0 INT8 model with ONNX Runtime on Azure Cobalt 100
-draft: true
-cascade:
-  draft: true
-
+
 minutes_to_complete: 60
 
-who_is_this_for: This Learning Path introduces ONNX deployment on Microsoft Azure Cobalt 100 (Arm-based) virtual machines. It is designed for developers deploying ONNX-based applications on Arm-based machines.
+who_is_this_for: This Learning Path is for developers deploying ONNX-based applications on Arm-based machines.
 
 learning_objectives:
 - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image.
-- Deploy ONNX on the Ubuntu Pro virtual machine.
 - Perform ONNX baseline testing and benchmarking on Arm64 virtual machines.
 
 prerequisites:
-- A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6).
-- Basic understanding of Python and machine learning concepts.
-- Familiarity with [ONNX Runtime](https://onnxruntime.ai/docs/) and Azure cloud services.
+- A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6)
+- Basic understanding of Python and machine learning concepts
+- Familiarity with [ONNX Runtime](https://onnxruntime.ai/docs/) and Azure cloud services
````
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/background.md
23 additions & 6 deletions
````diff
@@ -8,14 +8,31 @@ layout: "learningpathall"
 
 ## Azure Cobalt 100 Arm-based processor
 
-Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor: the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. These include web and application servers, data analytics, open-source databases, caching systems, and more. Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance.
-To learn more about Cobalt 100, refer to the blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353).
+Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor, the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. You can use Cobalt 100 for:
+
+- Web and application servers
+- Data analytics
+- Open-source databases
+- Caching systems
+- Many other scale-out workloads
+
+Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance.
+
+You can learn more about Cobalt 100 in the blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353).
 
 ## ONNX
-ONNX (Open Neural Network Exchange) is an open-source format designed for representing machine learning models.
-It provides interoperability between different deep learning frameworks, enabling models trained in one framework (such as PyTorch or TensorFlow) to be deployed and run in another.
 
-ONNX models are serialized into a standardized format that can be executed by the ONNX Runtime, a high-performance inference engine optimized for CPU, GPU, and specialized hardware accelerators. This separation of model training and inference allows developers to build flexible, portable, and production-ready AI workflows.
+ONNX (Open Neural Network Exchange) is an open-source format designed for representing machine learning models. You can use ONNX to:
+
+- Move models between different deep learning frameworks, such as PyTorch and TensorFlow
+- Deploy models trained in one framework to run in another
+- Build flexible, portable, and production-ready AI workflows
+
+ONNX models are serialized into a standardized format that you can execute with ONNX Runtime, a high-performance inference engine optimized for CPU, GPU, and specialized hardware accelerators. This separation of model training and inference lets you deploy models efficiently across cloud, edge, and mobile environments.
+
+To learn more, see the [ONNX official website](https://onnx.ai/) and the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).
+
 ## Summary
 
-ONNX is widely used in cloud, edge, and mobile environments to deliver efficient and scalable inference for deep learning models. Learn more from the [ONNX official website](https://onnx.ai/) and the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).
+Now that you understand the basics of Azure Cobalt 100 and ONNX, you're ready to start deploying and benchmarking ONNX models on Arm-based Azure infrastructure. If this is your first time working with these technologies, don't worry: each step in this Learning Path is designed to help you succeed.
````
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/baseline.md
4 additions & 0 deletions
````diff
@@ -52,3 +52,7 @@ Single inference latency(0.00260 sec): This is the time required for the model t
 Subsequent inferences are usually faster due to caching and optimized execution.
 
 This demonstrates that the setup is fully working, and ONNX Runtime efficiently executes quantized models on Arm64.
+
+Great job! You've completed your first ONNX Runtime inference on Arm-based Azure infrastructure. This baseline test confirms your environment is set up correctly and ready for more advanced benchmarking.
+
+Next, you'll use a dedicated benchmarking tool to capture more detailed performance statistics and further optimize your deployment.
````
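The single-inference latency figure discussed in this file comes from a simple timing harness. The sketch below shows the pattern with a stand-in function; in the real baseline script you would replace `run_inference` with the actual `session.run(...)` call, and the warm-up run is why subsequent inferences measure faster.

```python
import time

def run_inference():
    # Stand-in for session.run(None, {input_name: input_data});
    # substitute the real ONNX Runtime call from your baseline script.
    time.sleep(0.002)

run_inference()  # warm-up: the first pass pays one-time initialization costs

start = time.perf_counter()
run_inference()
latency = time.perf_counter() - start
print(f"Single inference latency: {latency:.5f} sec")
```

`time.perf_counter` is preferred over `time.time` here because it is a monotonic, high-resolution clock suited to short intervals.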
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/benchmarking.md
31 additions & 12 deletions
````diff
@@ -6,14 +6,20 @@ weight: 6
 layout: learningpathall
 ---
 
-Now that you have validated ONNX Runtime with Python-based timing (e.g., SqueezeNet baseline test), you can move to using a dedicated benchmarking utility called `onnxruntime_perf_test`. This tool is designed for systematic performance evaluation of ONNX models, allowing you to capture more detailed statistics than simple Python timing.
-This helps evaluate the ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances and other x86_64 instances. architectures.
+Now that you have validated ONNX Runtime with Python-based timing (for example, the SqueezeNet baseline test), you can move to using a dedicated benchmarking utility called `onnxruntime_perf_test`. This tool is designed for systematic performance evaluation of ONNX models, allowing you to capture more detailed statistics than simple Python timing.
+
+This approach helps you evaluate ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances and compare results with other architectures if needed.
+
+You are ready to run benchmarks, a key skill for optimizing real-world deployments.
 
 ## Run the performance tests using onnxruntime_perf_test
-The `onnxruntime_perf_test` is a performance benchmarking tool included in the ONNX Runtime source code. It is used to measure the inference performance of ONNX models and supports multiple execution providers (like CPU, GPU, or other execution providers). on Arm64 VMs, CPU execution is the focus.
+The `onnxruntime_perf_test` tool is included in the ONNX Runtime source code. You can use it to measure the inference performance of ONNX models and compare different execution providers (such as CPU or GPU). On Arm64 VMs, CPU execution is the focus.
 
-### Install Required Build Tools
-Before building or running `onnxruntime_perf_test`, you will need to install a set of development tools and libraries. These packages are required for compiling ONNX Runtime and handling model serialization via Protocol Buffers.
+### Install required build tools
+Before building or running `onnxruntime_perf_test`, you need to install a set of development tools and libraries. These packages are required for compiling ONNX Runtime and handling model serialization via Protocol Buffers.
 
 ```console
 sudo apt update
@@ -29,35 +35,48 @@ You should see output similar to:
 ```output
 libprotoc 3.21.12
 ```
-### Build ONNX Runtime from Source:
+### Build ONNX Runtime from source
 
-The benchmarking tool `onnxruntime_perf_test`, isn’t available as a pre-built binary for any platform. So, you will have to build it from the source, which is expected to take around 40 minutes.
+The benchmarking tool `onnxruntime_perf_test` isn’t available as a pre-built binary for any platform, so you will need to build it from source. This process can take up to 40 minutes.
 
 If the build completes successfully, you should see the executable at:
 ```output
 ./build/Linux/Release/onnxruntime_perf_test
 ```
+
 ### Run the benchmark
 Now that you have built the benchmarking tool, you can run inference benchmarks on the SqueezeNet INT8 model:
 
 ```console
 ./build/Linux/Release/onnxruntime_perf_test -e cpu -r 100 -m times -s -Z -I ../squeezenet-int8.onnx
 ```
+
 Breakdown of the flags:
--e cpu → Use the CPU execution provider.
--r 100 → Run 100 inference passes for statistical reliability.
--m times → Run in “repeat N times” mode. Useful for latency-focused measurement.
+
+- `-e cpu`: Use the CPU execution provider.
+- `-r 100`: Run 100 inference passes for statistical reliability.
+- `-m times`: Run in “repeat N times” mode for latency-focused measurement.
+- `-s`: Print summary statistics after the run.
+- `-Z`: Disable intra-op thread spinning for more consistent timing.
+- `-I ../squeezenet-int8.onnx`: Path to your ONNX model file.
+
+You should see output with latency and throughput statistics. If you encounter build errors, check that you have enough memory (at least 8 GB recommended) and that all dependencies are installed. For missing dependencies, review the installation steps above.
+
+If the benchmark runs successfully, you are ready to analyze and optimize your ONNX model performance on Arm-based Azure infrastructure.
+
+Well done! You have completed a full benchmarking workflow. Continue to the next section to explore further optimizations or advanced deployment scenarios.
 
 -s → Show detailed per-run statistics (latency distribution).
 
 -Z → Disable intra-op thread spinning. Reduces CPU waste when idle between runs, especially on high-core systems like Cobalt 100.
 
 -I → Input the ONNX model path directly, skipping pre-generated test data.
````
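What `-r 100 -m times -s` produces can be approximated in plain Python, which clarifies what the tool is measuring. This sketch uses a stand-in for the inference call (replace `run_once` with a real `session.run(...)` in practice); the percentile arithmetic is an illustration, not the tool's exact reporting format.

```python
import statistics
import time

def run_once():
    # Stand-in for one inference pass; replace with a real session.run(...) call.
    time.sleep(0.001)

latencies = []
for _ in range(100):  # mirrors -r 100: repeat the pass for statistical reliability
    start = time.perf_counter()
    run_once()
    latencies.append(time.perf_counter() - start)

latencies.sort()
avg = statistics.mean(latencies)
p50 = latencies[len(latencies) // 2]        # median latency
p90 = latencies[int(len(latencies) * 0.9) - 1]  # 90th-percentile latency
print(f"avg {avg * 1000:.3f} ms, p50 {p50 * 1000:.3f} ms, "
      f"p90 {p90 * 1000:.3f} ms, throughput {1 / avg:.1f} inferences/sec")
```

Collecting a distribution rather than a single timing is what makes the p90 figure possible, which is why the dedicated tool is preferred over one-shot Python timing.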
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/create-instance.md
10 additions & 8 deletions
````diff
@@ -28,29 +28,31 @@ Creating a virtual machine based on Azure Cobalt 100 is no different from creati
 3. Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select “Arm64” as the VM architecture.
 4. In the “Size” field, click on “See all sizes” and select the D-Series v6 family of virtual machines. Select “D4ps_v6” from the list.
 
-
+
 
 5. Select "SSH public key" as an Authentication type. Azure will automatically generate an SSH key pair for you and allow you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine.
 6. Fill in the Administrator username for your VM.
 7. Select "Generate new key pair", and select "RSA SSH Format" as the SSH Key Type. RSA could offer better security with keys longer than 3072 bits. Give a Key pair name to your SSH key.
 8. In the "Inbound port rules", select HTTP (80) and SSH (22) as the inbound ports.
 
-
+
 
 9. Click on the "Review + Create" tab and review the configuration for your virtual machine. It should look like the following:
 
-
+
 
 10. Finally, when you are confident about your selection, click on the "Create" button, and click on the "Download Private key and Create Resources" button.
 
-
+
 
-11. Your virtual machine should be ready and running within no time. You can SSH into the virtual machine using the private key, along with the Public IP details.
+11. Your virtual machine should be ready and running within a few minutes. You can SSH into the virtual machine using the private key, along with the Public IP details.
 
-
+You should see your VM listed as "Running" in the Azure portal. If you have trouble connecting, double-check your SSH key and ensure the correct ports are open. If the VM creation fails, check your Azure quota, region availability, or try a different VM size.
 
-{{% notice Note %}}
+Nice work! You have successfully provisioned an Arm-based Azure Cobalt 100 virtual machine. This environment is now ready for ONNX Runtime installation and benchmarking in the next steps.
 
-To learn more about Arm-based virtual machine in Azure, refer to “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure).
+
+
+{{% notice Note %}}
+For further information or alternative setup options, see “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure).
````
content/learning-paths/servers-and-cloud-computing/onnx-on-azure/deploy.md
26 additions & 12 deletions
````diff
@@ -10,12 +10,14 @@ layout: learningpathall
 ## ONNX Installation on Azure Ubuntu Pro 24.04 LTS
 To work with ONNX models on Azure, you will need a clean Python environment with the required packages. The following steps install Python, set up a virtual environment, and prepare for ONNX model execution using ONNX Runtime.
-This installs ONNX libraries along with FastAPI (web serving) and NumPy (for input tensor generation).
+This installs the ONNX libraries, FastAPI (for web serving, if you want to deploy models as an API), Uvicorn (the ASGI server for FastAPI), and NumPy (for input tensor generation).
+
+If you encounter errors during installation, check your internet connection and ensure you are using the activated virtual environment. For missing dependencies, try updating pip or installing system packages as needed.
+
+After installation, you're ready to validate your setup.
 
-### Validate ONNX and ONNX Runtime:
-Once the libraries are installed, you should verify that both ONNX and ONNX Runtime are correctly set up on your VM.
+### Validate ONNX and ONNX Runtime
+Once the libraries are installed, verify that both ONNX and ONNX Runtime are correctly set up on your VM.
 
 Create a file named `version.py` with the following code:
@@ -54,10 +63,15 @@ You should see output similar to:
 ONNX version: 1.19.0
 ONNX Runtime version: 1.23.0
 ```
-With this validation, you have confirmed that ONNX and ONNX Runtime are installed and ready on your Azure Cobalt 100 VM. This is the foundation for running inference workloads and serving ONNX models.
+If you see version numbers for both ONNX and ONNX Runtime, your environment is ready. If you get an ImportError, double-check that your virtual environment is activated and that the libraries are installed.
+
+Great job! You have confirmed that ONNX and ONNX Runtime are installed and ready on your Azure Cobalt 100 VM. This is the foundation for running inference workloads and serving ONNX models.
+
-### Download and Validate ONNX Model - SqueezeNet:
-SqueezeNet is a lightweight convolutional neural network (CNN) architecture designed to provide accuracy close to AlexNet while using 50x fewer parameters and a much smaller model size. This makes it well-suited for benchmarking ONNX Runtime.
+### Download and validate the ONNX model: SqueezeNet
+SqueezeNet is a lightweight convolutional neural network (CNN) architecture designed to provide accuracy close to AlexNet while using 50x fewer parameters and a much smaller model size. This makes it well-suited for benchmarking ONNX Runtime.
+
+Now that your environment is set up and validated, you're ready to download and test the SqueezeNet model in the next step.
````
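The validation step earlier in this file can be made more forgiving than a bare import. The hypothetical variant below reports a missing package instead of raising ImportError; the probing loop is an illustration of the same check, not the Learning Path's actual `version.py`.

```python
import importlib

# Probe each package and record its version, or None when it is missing.
versions = {}
for pkg in ("onnx", "onnxruntime"):
    try:
        versions[pkg] = importlib.import_module(pkg).__version__
    except ImportError:
        versions[pkg] = None  # not installed, or the virtual environment is inactive

for pkg, ver in versions.items():
    print(f"{pkg} version: {ver if ver else 'not installed'}")
```

A `None` entry here usually means the virtual environment was not activated before running the script, which matches the troubleshooting advice above.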