Commit 91d9a52

First pass
1 parent 7a24f8d commit 91d9a52

File tree

6 files changed: +99 −47 lines


content/learning-paths/servers-and-cloud-computing/onnx-on-azure/_index.md

Lines changed: 5 additions & 9 deletions
@@ -1,23 +1,19 @@
 ---
 title: Deploy SqueezeNet 1.0 INT8 model with ONNX Runtime on Azure Cobalt 100

-draft: true
-cascade:
-  draft: true
-
+
 minutes_to_complete: 60

-who_is_this_for: This Learning Path introduces ONNX deployment on Microsoft Azure Cobalt 100 (Arm-based) virtual machines. It is designed for developers deploying ONNX-based applications on Arm-based machines.
+who_is_this_for: This Learning Path is for developers deploying ONNX-based applications on Arm-based machines.

 learning_objectives:
 - Provision an Azure Arm64 virtual machine using Azure console, with Ubuntu Pro 24.04 LTS as the base image.
-- Deploy ONNX on the Ubuntu Pro virtual machine.
 - Perform ONNX baseline testing and benchmarking on Arm64 virtual machines.

 prerequisites:
-- A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6).
-- Basic understanding of Python and machine learning concepts.
-- Familiarity with [ONNX Runtime](https://onnxruntime.ai/docs/) and Azure cloud services.
+- A [Microsoft Azure](https://azure.microsoft.com/) account with access to Cobalt 100 based instances (Dpsv6)
+- Basic understanding of Python and machine learning concepts
+- Familiarity with [ONNX Runtime](https://onnxruntime.ai/docs/) and Azure cloud services

 author: Pareena Verma

content/learning-paths/servers-and-cloud-computing/onnx-on-azure/background.md

Lines changed: 23 additions & 6 deletions
@@ -8,14 +8,31 @@ layout: "learningpathall"

 ## Azure Cobalt 100 Arm-based processor

-Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor: the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. These include web and application servers, data analytics, open-source databases, caching systems, and more. Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance.

-To learn more about Cobalt 100, refer to the blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353).
+Azure’s Cobalt 100 is built on Microsoft's first-generation, in-house Arm-based processor, the Cobalt 100. Designed entirely by Microsoft and based on Arm’s Neoverse N2 architecture, this 64-bit CPU delivers improved performance and energy efficiency across a broad spectrum of cloud-native, scale-out Linux workloads. You can use Cobalt 100 for:
+
+- Web and application servers
+- Data analytics
+- Open-source databases
+- Caching systems
+- Many other scale-out workloads
+
+Running at 3.4 GHz, the Cobalt 100 processor allocates a dedicated physical core for each vCPU, ensuring consistent and predictable performance.
+
+You can learn more about Cobalt 100 in the blog [Announcing the preview of new Azure virtual machine based on the Azure Cobalt 100 processor](https://techcommunity.microsoft.com/blog/azurecompute/announcing-the-preview-of-new-azure-vms-based-on-the-azure-cobalt-100-processor/4146353).

 ## ONNX

-ONNX (Open Neural Network Exchange) is an open-source format designed for representing machine learning models.
-It provides interoperability between different deep learning frameworks, enabling models trained in one framework (such as PyTorch or TensorFlow) to be deployed and run in another.

-ONNX models are serialized into a standardized format that can be executed by the ONNX Runtime, a high-performance inference engine optimized for CPU, GPU, and specialized hardware accelerators. This separation of model training and inference allows developers to build flexible, portable, and production-ready AI workflows.
+ONNX (Open Neural Network Exchange) is an open-source format designed for representing machine learning models. You can use ONNX to:
+
+- Move models between different deep learning frameworks, such as PyTorch and TensorFlow
+- Deploy models trained in one framework to run in another
+- Build flexible, portable, and production-ready AI workflows
+
+ONNX models are serialized into a standardized format that you can execute with ONNX Runtime, a high-performance inference engine optimized for CPU, GPU, and specialized hardware accelerators. This separation of model training and inference lets you deploy models efficiently across cloud, edge, and mobile environments.
+
+To learn more, see the [ONNX official website](https://onnx.ai/) and the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).
+
+## Summary

-ONNX is widely used in cloud, edge, and mobile environments to deliver efficient and scalable inference for deep learning models. Learn more from the [ONNX official website](https://onnx.ai/) and the [ONNX Runtime documentation](https://onnxruntime.ai/docs/).
+Now that you understand the basics of Azure Cobalt 100 and ONNX, you're ready to start deploying and benchmarking ONNX models on Arm-based Azure infrastructure. If this is your first time working with these technologies, don't worry: each step in this Learning Path is designed to help you succeed.
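The Learning Path above deploys an INT8-quantized SqueezeNet model. To make the quantization idea concrete, here is a minimal, dependency-free sketch of symmetric INT8 quantization in pure Python. The function names are illustrative, and this is not the actual tooling used to produce the quantized model:

```python
def quantize_int8(weights):
    """Map float weights onto the signed 8-bit range [-127, 127] with one shared scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    return [max(-127, min(127, round(w / scale))) for w in weights], scale

def dequantize_int8(q, scale):
    """Recover approximate float weights; error is bounded by half a quantization step."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.89]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
print(q, [round(r, 4) for r in restored])
```

Storing one byte per weight instead of four is what gives INT8 models their smaller size and better cache behavior, at the cost of the small reconstruction error shown here.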

content/learning-paths/servers-and-cloud-computing/onnx-on-azure/baseline.md

Lines changed: 4 additions & 0 deletions
@@ -52,3 +52,7 @@ Single inference latency (0.00260 sec): This is the time required for the model t
 Subsequent inferences are usually faster due to caching and optimized execution.

 This demonstrates that the setup is fully working, and ONNX Runtime efficiently executes quantized models on Arm64.
+
+Great job! You've completed your first ONNX Runtime inference on Arm-based Azure infrastructure. This baseline test confirms your environment is set up correctly and ready for more advanced benchmarking.
+
+Next, you'll use a dedicated benchmarking tool to capture more detailed performance statistics and further optimize your deployment.
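The warm-up effect mentioned above (first inference slower, later ones faster) is easy to measure yourself. The following self-contained sketch uses only the Python standard library; the dummy workload stands in for a real `session.run(...)` call, so treat it as an illustration rather than the Learning Path's baseline script:

```python
import statistics
import time

def time_inference(run_once, warmup=3, runs=20):
    """Time a callable, discarding warm-up runs that hit cold caches and lazy init."""
    for _ in range(warmup):
        run_once()  # warm-up runs are executed but not recorded
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        run_once()
        latencies.append(time.perf_counter() - start)
    return statistics.mean(latencies), min(latencies), max(latencies)

# A dummy CPU-bound workload stands in for session.run(...) here.
mean_s, min_s, max_s = time_inference(lambda: sum(i * i for i in range(10_000)))
print(f"mean={mean_s:.6f}s min={min_s:.6f}s max={max_s:.6f}s")
```

Separating warm-up from measured runs is the same discipline `onnxruntime_perf_test` applies, and it keeps one slow first iteration from skewing your averages.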

content/learning-paths/servers-and-cloud-computing/onnx-on-azure/benchmarking.md

Lines changed: 31 additions & 12 deletions
@@ -6,14 +6,20 @@
 layout: learningpathall
 ---

-Now that you have validated ONNX Runtime with Python-based timing (e.g., SqueezeNet baseline test), you can move to using a dedicated benchmarking utility called `onnxruntime_perf_test`. This tool is designed for systematic performance evaluation of ONNX models, allowing you to capture more detailed statistics than simple Python timing.
-This helps evaluate the ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances and other x86_64 instances. architectures.
+
+Now that you have validated ONNX Runtime with Python-based timing (for example, the SqueezeNet baseline test), you can move to using a dedicated benchmarking utility called `onnxruntime_perf_test`. This tool is designed for systematic performance evaluation of ONNX models, allowing you to capture more detailed statistics than simple Python timing.
+
+This approach helps you evaluate ONNX Runtime efficiency on Azure Arm64-based Cobalt 100 instances and compare results with other architectures if needed.
+
+You are ready to run benchmarks: a key skill for optimizing real-world deployments.

 ## Run the performance tests using onnxruntime_perf_test
-The `onnxruntime_perf_test` is a performance benchmarking tool included in the ONNX Runtime source code. It is used to measure the inference performance of ONNX models and supports multiple execution providers (like CPU, GPU, or other execution providers). on Arm64 VMs, CPU execution is the focus.
+The `onnxruntime_perf_test` tool is included in the ONNX Runtime source code. You can use it to measure the inference performance of ONNX models and compare different execution providers (such as CPU or GPU). On Arm64 VMs, CPU execution is the focus.
+
-### Install Required Build Tools
-Before building or running `onnxruntime_perf_test`, you will need to install a set of development tools and libraries. These packages are required for compiling ONNX Runtime and handling model serialization via Protocol Buffers.
+### Install required build tools
+Before building or running `onnxruntime_perf_test`, you need to install a set of development tools and libraries. These packages are required for compiling ONNX Runtime and handling model serialization via Protocol Buffers.

 ```console
 sudo apt update
@@ -29,35 +35,48 @@ You should see output similar to:
 ```output
 libprotoc 3.21.12
 ```
-### Build ONNX Runtime from Source:
+### Build ONNX Runtime from source

-The benchmarking tool `onnxruntime_perf_test`, isn’t available as a pre-built binary for any platform. So, you will have to build it from the source, which is expected to take around 40 minutes.
+The benchmarking tool `onnxruntime_perf_test` isn’t available as a pre-built binary for any platform, so you will need to build it from source. This process can take up to 40 minutes.

-Clone onnxruntime repo:
+Clone the ONNX Runtime repository:
 ```console
 git clone --recursive https://github.com/microsoft/onnxruntime
 cd onnxruntime
 ```
+
 Now, build the benchmark tool:

 ```console
 ./build.sh --config Release --build_dir build/Linux --build_shared_lib --parallel --build --update --skip_tests
 ```
-You should see the executable at:
+If the build completes successfully, you should see the executable at:
 ```output
 ./build/Linux/Release/onnxruntime_perf_test
 ```

+
 ### Run the benchmark
 Now that you have built the benchmarking tool, you can run inference benchmarks on the SqueezeNet INT8 model:

 ```console
 ./build/Linux/Release/onnxruntime_perf_test -e cpu -r 100 -m times -s -Z -I ../squeezenet-int8.onnx
 ```
+
 Breakdown of the flags:
-  -e cpu → Use the CPU execution provider.
-  -r 100 → Run 100 inference passes for statistical reliability.
-  -m times → Run in “repeat N times” mode. Useful for latency-focused measurement.
+
+- `-e cpu`: Use the CPU execution provider.
+- `-r 100`: Run 100 inference passes for statistical reliability.
+- `-m times`: Run in “repeat N times” mode for latency-focused measurement.
+- `-s`: Show detailed per-run statistics (latency distribution).
+- `-Z`: Disable intra-op thread spinning, which reduces CPU waste when idle between runs, especially on high-core systems like Cobalt 100.
+- `-I ../squeezenet-int8.onnx`: Load the ONNX model directly, skipping pre-generated test data.
+
+You should see output with latency and throughput statistics. If you encounter build errors, check that you have enough memory (at least 8 GB is recommended) and that all dependencies are installed. For missing dependencies, review the installation steps above.
+
+If the benchmark runs successfully, you are ready to analyze and optimize your ONNX model performance on Arm-based Azure infrastructure.
+
+Well done! You have completed a full benchmarking workflow. Continue to the next section to explore further optimizations or advanced deployment scenarios.
 -s → Show detailed per-run statistics (latency distribution).
 -Z → Disable intra-op thread spinning. Reduces CPU waste when idle between runs, especially on high-core systems like Cobalt 100.
 -I → Input the ONNX model path directly, skipping pre-generated test data.
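`onnxruntime_perf_test` prints summary latency statistics, including percentiles. If you ever capture raw per-run latencies yourself (for example, from Python timing), you can compute comparable figures with a short nearest-rank sketch. This mirrors the idea, not the tool's exact implementation:

```python
import math

def latency_percentile(latencies, pct):
    """Nearest-rank percentile over a list of per-run latencies (in seconds)."""
    ordered = sorted(latencies)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

# Example per-run latencies from ten hypothetical inference passes.
runs = [0.0026, 0.0021, 0.0024, 0.0030, 0.0022,
        0.0025, 0.0023, 0.0027, 0.0028, 0.0029]
for pct in (50, 90, 99):
    print(f"p{pct}: {latency_percentile(runs, pct):.4f}s")
```

Percentiles are more informative than the mean for latency work because tail values (p90, p99) reveal jitter that an average hides.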

content/learning-paths/servers-and-cloud-computing/onnx-on-azure/create-instance.md

Lines changed: 10 additions & 8 deletions
@@ -28,29 +28,31 @@ Creating a virtual machine based on Azure Cobalt 100 is no different from creati
 3. Choose the image for your virtual machine (for example, Ubuntu Pro 24.04 LTS) and select “Arm64” as the VM architecture.
 4. In the “Size” field, click on “See all sizes” and select the D-Series v6 family of virtual machines. Select “D4ps_v6” from the list.

-![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Figure 1: Select the D-Series v6 family of virtual machines")
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance.png "Select the D-Series v6 family of virtual machines")

 5. Select "SSH public key" as an Authentication type. Azure will automatically generate an SSH key pair for you and allow you to store it for future use. It is a fast, simple, and secure way to connect to your virtual machine.
 6. Fill in the Administrator username for your VM.
 7. Select "Generate new key pair", and select "RSA SSH Format" as the SSH Key Type. RSA could offer better security with keys longer than 3072 bits. Give a Key pair name to your SSH key.
 8. In the "Inbound port rules", select HTTP (80) and SSH (22) as the inbound ports.

-![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Figure 2: Allow inbound port rules")
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance1.png "Allow inbound port rules")

 9. Click on the "Review + Create" tab and review the configuration for your virtual machine. It should look like the following:

-![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Figure 3: Review and Create an Azure Cobalt 100 Arm64 VM")
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/ubuntu-pro.png "Review and Create an Azure Cobalt 100 Arm64 VM")

 10. Finally, when you are confident about your selection, click on the "Create" button, and click on the "Download Private key and Create Resources" button.

-![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Figure 4: Download Private key and Create Resources")
+![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/instance4.png "Download Private key and Create Resources")

-11. Your virtual machine should be ready and running within no time. You can SSH into the virtual machine using the private key, along with the Public IP details.
+11. Your virtual machine should be ready and running within a few minutes. You can SSH into the virtual machine using the private key, along with the Public IP details.

-![Azure portal VM creation — Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "Figure 5: VM deployment confirmation in Azure portal")
+You should see your VM listed as "Running" in the Azure portal. If you have trouble connecting, double-check your SSH key and ensure the correct ports are open. If the VM creation fails, check your Azure quota, region availability, or try a different VM size.

-{{% notice Note %}}
+Nice work! You have successfully provisioned an Arm-based Azure Cobalt 100 virtual machine. This environment is now ready for ONNX Runtime installation and benchmarking in the next steps.

-To learn more about Arm-based virtual machine in Azure, refer to “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure).
+![Azure portal VM creation - Azure Cobalt 100 Arm64 virtual machine (D4ps_v6) alt-text#center](images/final-vm.png "VM deployment confirmation in Azure portal")

+{{% notice Note %}}
+For further information or alternative setup options, see “Getting Started with Microsoft Azure” in [Get started with Arm-based cloud instances](/learning-paths/servers-and-cloud-computing/csp/azure).
 {{% /notice %}}

content/learning-paths/servers-and-cloud-computing/onnx-on-azure/deploy.md

Lines changed: 26 additions & 12 deletions
@@ -10,12 +10,14 @@ layout: learningpathall
 ## ONNX Installation on Azure Ubuntu Pro 24.04 LTS
 To work with ONNX models on Azure, you will need a clean Python environment with the required packages. The following steps install Python, set up a virtual environment, and prepare for ONNX model execution using ONNX Runtime.

-### Install Python and Virtual Environment:
+
+### Install Python and virtual environment

 ```console
 sudo apt update
 sudo apt install -y python3 python3-pip python3-virtualenv python3-venv
 ```
+
 Create and activate a virtual environment:

 ```console
@@ -24,28 +26,35 @@ source onnx-env/bin/activate
 ```
 {{% notice Note %}}Using a virtual environment isolates ONNX and its dependencies to avoid system conflicts.{{% /notice %}}

-### Install ONNX and Required Libraries:
+Once your environment is active, you're ready to install the required libraries.
+
+### Install ONNX and required libraries

 Upgrade pip and install ONNX with its runtime and supporting libraries:
 ```console
 pip install --upgrade pip
 pip install onnx onnxruntime fastapi uvicorn numpy
 ```
-This installs ONNX libraries along with FastAPI (web serving) and NumPy (for input tensor generation).
+This installs the ONNX libraries, FastAPI (for web serving, if you want to deploy models as an API), Uvicorn (an ASGI server for FastAPI), and NumPy (for input tensor generation).
+
+If you encounter errors during installation, check your internet connection and ensure you are using the activated virtual environment. For missing dependencies, try updating pip or installing system packages as needed.
+
+After installation, you're ready to validate your setup.

-### Validate ONNX and ONNX Runtime:
-Once the libraries are installed, you should verify that both ONNX and ONNX Runtime are correctly set up on your VM.
+
+### Validate ONNX and ONNX Runtime
+Once the libraries are installed, verify that both ONNX and ONNX Runtime are correctly set up on your VM.

 Create a file named `version.py` with the following code:
 ```python
 import onnx
 import onnxruntime

-print("ONNX version:", onnx.__version__)
-print("ONNX Runtime version:", onnxruntime.__version__)
+print("ONNX version:", onnx.__version__)
+print("ONNX Runtime version:", onnxruntime.__version__)
 ```
-Run the script:
-
+Run the script:
 ```console
 python3 version.py
 ```
@@ -54,10 +63,15 @@ You should see output similar to:
 ONNX version: 1.19.0
 ONNX Runtime version: 1.23.0
 ```
-With this validation, you have confirmed that ONNX and ONNX Runtime are installed and ready on your Azure Cobalt 100 VM. This is the foundation for running inference workloads and serving ONNX models.
+If you see version numbers for both ONNX and ONNX Runtime, your environment is ready. If you get an ImportError, double-check that your virtual environment is activated and that the libraries are installed.
+
+Great job! You have confirmed that ONNX and ONNX Runtime are installed and ready on your Azure Cobalt 100 VM. This is the foundation for running inference workloads and serving ONNX models.
+
+### Download and validate the ONNX model: SqueezeNet
+SqueezeNet is a lightweight convolutional neural network (CNN) architecture designed to provide accuracy close to AlexNet while using 50x fewer parameters and a much smaller model size. This makes it well-suited for benchmarking ONNX Runtime.

-### Download and Validate ONNX Model - SqueezeNet:
-SqueezeNet is a lightweight convolutional neural network (CNN) architecture designed to provide accuracy close to AlexNet while using 50x fewer parameters and a much smaller model size. This makes it well-suited for benchmarking ONNX Runtime.
+Now that your environment is set up and validated, you're ready to download and test the SqueezeNet model.
 Download the quantized model:
 ```console
 wget https://github.com/onnx/models/raw/main/validated/vision/classification/squeezenet/model/squeezenet1.0-12-int8.onnx -O squeezenet-int8.onnx
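After downloading, it's good practice to confirm the file arrived intact. The Learning Path doesn't publish a checksum for this model, so the sketch below simply computes one for you to record and compare against on later downloads; the filename matches the `wget -O` target above:

```python
import hashlib
import os

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in chunks so large models don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the printed value against a hash you recorded from a trusted download.
if os.path.exists("squeezenet-int8.onnx"):
    print(sha256_of("squeezenet-int8.onnx"))
```

A mismatched hash usually means a truncated or corrupted download; re-run the `wget` command if that happens.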
