Commit 94bd7e9: draft (1 parent: 4a95ae6)
1 file changed: 35 additions, 36 deletions

---
title: ONNX models
titleSuffix: Azure Machine Learning
description: Learn how using the Open Neural Network Exchange (ONNX) can help optimize inference of your machine learning models.
services: machine-learning
ms.service: azure-machine-learning
ms.subservice: core
ms.topic: concept-article
ms.author: mopeakande
author: msakande
ms.reviewer: kritifaujdar
ms.date: 09/27/2024
#customer intent: As a data scientist, I want to learn about ONNX so I can use it to optimize the inference of my machine learning models.
---

# ONNX and Azure Machine Learning

This article describes how the [Open Neural Network Exchange (ONNX)](https://onnx.ai) can help optimize the inference of your machine learning models. *Inference*, or *model scoring*, is the process of using a deployed model to generate predictions on production data.

Optimizing machine learning models for inference requires you to tune the model and the inference library to make the most of hardware capabilities. This task becomes complex if you want to get optimal performance on different kinds of platforms such as cloud, edge, CPU, or GPU, because each platform has different capabilities and characteristics. The complexity increases if you need to run models from various frameworks on different platforms. It can be time-consuming to optimize all the different combinations of frameworks and hardware.

A useful solution is to train your model one time in your preferred framework and then run it anywhere on the cloud or edge. ONNX can help with this solution.

Microsoft and a community of partners created ONNX as an open standard for representing machine learning models. You can export or convert models from [many frameworks](https://onnx.ai/supported-tools), including TensorFlow, PyTorch, scikit-learn, Keras, Chainer, MXNet, and MATLAB, to the standard ONNX format. You can run models in the ONNX format on various platforms and devices.
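
For example, the following minimal sketch shows one way to export a PyTorch model to ONNX by using `torch.onnx.export`. The model, input shape, and file name are illustrative assumptions, not part of this article's workflow.

```python
import torch
import torchvision

# Any torch.nn.Module works here; a randomly initialized ResNet-18
# from torchvision keeps the example self-contained.
model = torchvision.models.resnet18()
model.eval()

# A dummy input that defines the input shape of the exported graph.
dummy_input = torch.randn(1, 3, 224, 224)

# Export the model to the ONNX format.
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
)
```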

The following ONNX flow diagram shows available frameworks and deployment options.

:::image type="content" source="media/concept-onnx/onnx.png" alt-text="ONNX flow diagram showing training, converters, and deployment." lightbox="media/concept-onnx/onnx.png":::

## ONNX Runtime

[ONNX Runtime](https://onnxruntime.ai) is a high-performance inference engine for deploying ONNX models to production. ONNX Runtime is optimized for both cloud and edge, and works on Linux, Windows, and macOS. ONNX is written in C++, but also has C, Python, C#, Java, and JavaScript (Node.js) APIs for use in those environments.

ONNX Runtime supports both deep neural network (DNN) and traditional machine learning models, and it integrates with accelerators on different hardware, such as TensorRT on NVIDIA GPUs, OpenVINO on Intel processors, and DirectML on Windows. By using ONNX Runtime, you can benefit from extensive production-grade optimizations, testing, and ongoing improvements.
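
As a minimal sketch, the following example shows how you might target one of these accelerators by listing execution providers when you create an inference session. The provider names are real ONNX Runtime identifiers, but which ones are available depends on the build you installed and on your hardware; the model path is illustrative.

```python
import onnxruntime

# Prefer the CUDA execution provider (for NVIDIA GPUs) and fall back
# to the default CPU provider when CUDA isn't available.
session = onnxruntime.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)

# List the execution providers that this installed build supports.
print(onnxruntime.get_available_providers())
```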

High-scale Microsoft services such as Bing, Office, and Azure AI use ONNX Runtime. Although performance gains depend on many factors, these Microsoft services report an average 2x performance gain on CPU by using ONNX. ONNX Runtime runs in Azure Machine Learning and in other Microsoft products that support machine learning workloads, including:

- **Windows**. ONNX Runtime is built into Windows as part of [Windows Machine Learning](/windows/ai/windows-ml/) and runs on hundreds of millions of devices.
- **Azure SQL**. [Azure SQL Edge](/azure/azure-sql-edge/onnx-overview) and [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/machine-learning-services-overview) use ONNX to run native scoring on data.
- **ML.NET**. For an example, see [Tutorial: Detect objects using ONNX in ML.NET](/dotnet/machine-learning/tutorials/object-detection-onnx).

## Ways to obtain ONNX models

You can obtain ONNX models in several ways:

- Train a new ONNX model in Azure Machine Learning or use [automated machine learning capabilities](concept-automated-ml.md#automl--onnx).
- Convert an existing model from another format to ONNX, as shown in the sketch after this list. For more information, see [ONNX Tutorials](https://github.com/onnx/tutorials).
- Get a pretrained ONNX model from the [ONNX Model Zoo](https://github.com/onnx/models).
- Generate a customized ONNX model from [Azure AI Custom Vision service](/azure/ai-services/custom-vision-service/).
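
As an illustration of the conversion path, the following sketch converts a scikit-learn classifier to ONNX with the `skl2onnx` converter package. The package, model, and file name are assumptions for this example; they aren't prescribed by this article.

```python
import numpy as np
from skl2onnx import to_onnx
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a small scikit-learn model.
X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=500).fit(X, y)

# Convert it to ONNX. A sample input tells the converter the
# expected input type and shape.
onnx_model = to_onnx(clf, X[:1].astype(np.float32))

# Save the serialized model for use with ONNX Runtime.
with open("iris_logreg.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```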

You can represent many models as ONNX models, including image classification, object detection, and text processing models. If you can't convert your model successfully, file a GitHub issue in the repository of the converter you used.

## ONNX model deployment in Azure

You can deploy, manage, and monitor your ONNX models with Azure Machine Learning. Use the standard [MLOps deployment workflow](concept-model-management-and-deployment.md) with ONNX Runtime to create a REST endpoint hosted in the cloud.
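
As a rough sketch only, the following example shows how such a deployment might look with the Azure Machine Learning Python SDK v2 (`azure-ai-ml`). All names, the instance type, and the environment and scoring-script details are placeholder assumptions; follow the MLOps deployment workflow article for the authoritative steps.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    ManagedOnlineEndpoint,
    Model,
)
from azure.identity import DefaultAzureCredential

# Connect to the workspace; identifiers are placeholders.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Create the endpoint that hosts the REST API.
endpoint = ManagedOnlineEndpoint(name="onnx-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deploy the ONNX model behind the endpoint. A custom model needs an
# environment with onnxruntime installed and a score.py that loads the
# model and handles scoring requests.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="onnx-endpoint",
    model=Model(path="model.onnx"),
    environment=Environment(
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
        conda_file="environment/conda.yaml",
    ),
    code_configuration=CodeConfiguration(
        code="scoring/", scoring_script="score.py"
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```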

## Python packages for ONNX Runtime

Python packages for the [CPU](https://pypi.org/project/onnxruntime) and [GPU](https://pypi.org/project/onnxruntime-gpu) builds of ONNX Runtime are available on [PyPI](https://pypi.org). Be sure to review the [system requirements](https://github.com/Microsoft/onnxruntime#system-requirements) before installation.

To install ONNX Runtime for Python, use one of the following commands:

```bash
pip install onnxruntime       # CPU build
pip install onnxruntime-gpu   # GPU build
```

To use ONNX Runtime, import the package and load your model into an inference session:

```python
import onnxruntime
session = onnxruntime.InferenceSession("path to model")
```

The documentation accompanying the model usually tells you the inputs and outputs for using the model. You can also use a visualization tool such as [Netron](https://github.com/lutzroeder/Netron) to view the model.

ONNX Runtime lets you query the model metadata, inputs, and outputs as follows:

```python
session.get_modelmeta()
first_input_name = session.get_inputs()[0].name
first_output_name = session.get_outputs()[0].name
```

To run inference on your model, use `run` and pass in the list of outputs you want returned and a map of the input values. Leave the output list empty if you want all of the outputs. The result is a list of the outputs.

```python
results = session.run(["output1", "output2"], {
    "input1": indata1, "input2": indata2})
results = session.run([], {"input1": indata1, "input2": indata2})
```
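
Putting these pieces together, the following minimal sketch scores a model with a random NumPy input. The model path and input shape are illustrative assumptions; use your own model's documented input.

```python
import numpy as np
import onnxruntime

# Load an exported model (path is illustrative).
session = onnxruntime.InferenceSession("resnet18.onnx")

# Build a random input that matches the model's expected shape and type.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# An empty output list returns all outputs.
outputs = session.run([], {input_name: x})
print(outputs[0].shape)
```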

For the complete ONNX Runtime API reference, see the [Python API documentation](https://onnxruntime.ai/docs/api/python/api_summary.html).

## Related content

- [ONNX project website](https://onnx.ai)
- [ONNX GitHub repository](https://github.com/onnx/onnx)
- [ONNX Runtime project website](https://onnxruntime.ai)
- [ONNX Runtime GitHub repository](https://github.com/Microsoft/onnxruntime)
