
Commit 71b521c

Merge pull request #267730 from ssalgadodev/patch-75
Update how-to-inference-onnx-automl-image-models.md
2 parents e4c3997 + d3e9ba4 commit 71b521c

File tree: 1 file changed (+15, -15 lines)

articles/machine-learning/how-to-inference-onnx-automl-image-models.md

Lines changed: 15 additions & 15 deletions
```diff
@@ -8,7 +8,7 @@ ms.reviewer: ssalgado
 ms.service: machine-learning
 ms.subservice: automl
 ms.topic: how-to
-ms.date: 10/18/2021
+ms.date: 02/18/2024
 ms.custom: sdkv2
 ---
```

```diff
@@ -17,7 +17,7 @@ ms.custom: sdkv2
 [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
 
 
-In this article, you will learn how to use Open Neural Network Exchange (ONNX) to make predictions on computer vision models generated from automated machine learning (AutoML) in Azure Machine Learning.
+In this article, you'll learn how to use Open Neural Network Exchange (ONNX) to make predictions on computer vision models generated from automated machine learning (AutoML) in Azure Machine Learning.
 
 To use ONNX for predictions, you need to:
 
```

```diff
@@ -31,7 +31,7 @@ To use ONNX for predictions, you need to:
 
 [ONNX Runtime](https://onnxruntime.ai/index.html) is an open-source project that supports cross-platform inference. ONNX Runtime provides APIs across programming languages (including Python, C++, C#, C, Java, and JavaScript). You can use these APIs to perform inference on input images. After you have the model that has been exported to ONNX format, you can use these APIs on any programming language that your project needs.
 
-In this guide, you'll learn how to use [Python APIs for ONNX Runtime](https://onnxruntime.ai/docs/get-started/with-python.html) to make predictions on images for popular vision tasks. You can use these ONNX exported models across languages.
+In this guide, you learn how to use [Python APIs for ONNX Runtime](https://onnxruntime.ai/docs/get-started/with-python.html) to make predictions on images for popular vision tasks. You can use these ONNX exported models across languages.
 
 ## Prerequisites
 
```

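The inference flow the paragraph in this hunk describes can be sketched with the ONNX Runtime Python API. This is a minimal sketch, not the article's helper code: `predict_with_onnx` is a hypothetical name, and it assumes the `onnxruntime` package is installed and `onnx_model_path` points at the *model.onnx* file downloaded from the run.

```python
import numpy as np

def predict_with_onnx(onnx_model_path: str, image_batch: np.ndarray):
    """Run a preprocessed image batch through an exported AutoML ONNX model.

    Minimal sketch: assumes the onnxruntime package is installed and
    onnx_model_path points at the downloaded model.onnx file.
    """
    import onnxruntime  # imported here so the sketch reads without the package

    session = onnxruntime.InferenceSession(onnx_model_path)
    # The exported model declares its input and output names; query them
    # rather than hard-coding task-specific strings.
    input_name = session.get_inputs()[0].name
    output_names = [output.name for output in session.get_outputs()]
    predictions = session.run(output_names, {input_name: image_batch.astype(np.float32)})
    return output_names, predictions
```

The same session can be reused across calls; creating `InferenceSession` once and calling `run` per batch is the usual pattern.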
```diff
@@ -282,7 +282,7 @@ onnx_model_path = mlflow_client.download_artifacts(
 
 After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](how-to-prepare-datasets-for-automl-images.md) for each vision task.
 
-We've trained the models for all vision tasks with their respective datasets to demonstrate ONNX model inference.
+We trained the models for all vision tasks with their respective datasets to demonstrate ONNX model inference.
 
 ## Load the labels and ONNX model files
 
```

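The label-loading step named in this hunk's heading can be sketched as follows. The file name and structure are assumptions (a JSON list of class names saved next to *model.onnx*), and `load_labels` is a hypothetical helper, not the article's code:

```python
import json
import tempfile

def load_labels(labels_path):
    """Load the class-name list stored alongside the downloaded model.onnx."""
    with open(labels_path) as f:
        return json.load(f)

# Quick self-check with a throwaway file standing in for the real labels file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(["cat", "dog"], f)
    tmp_path = f.name

print(load_labels(tmp_path))  # → ['cat', 'dog']
```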
```diff
@@ -393,11 +393,11 @@ The output is a tuple of `output_names` and predictions. Here, `output_names` an
 
 | Output name | Output shape | Output type | Description |
 | -------- |----------|-----|------|
-| `output_names` | `(3*batch_size)` | List of keys | For a batch size of 2, `output_names` will be `['boxes_0', 'labels_0', 'scores_0', 'boxes_1', 'labels_1', 'scores_1']` |
-| `predictions` | `(3*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` will take the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n2_boxes, 4), (n2_boxes), (n2_boxes)]`. Here, values at each index correspond to same index in `output_names`. |
+| `output_names` | `(3*batch_size)` | List of keys | For a batch size of 2, `output_names` is `['boxes_0', 'labels_0', 'scores_0', 'boxes_1', 'labels_1', 'scores_1']` |
+| `predictions` | `(3*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` takes the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n2_boxes, 4), (n2_boxes), (n2_boxes)]`. Here, values at each index correspond to same index in `output_names`. |
 
 
-The following table describes boxes, labels and scores returned for each sample in the batch of images.
+The following table describes boxes, labels, and scores returned for each sample in the batch of images.
 
 | Name | Shape | Type | Description |
 | -------- |----------|-----|------|
```
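The flat three-per-image layout in this table can be regrouped into one dictionary per image. A sketch using zero arrays with hypothetical detection counts `n1_boxes=3` and `n2_boxes=1`:

```python
import numpy as np

# Illustrative values matching the table for a batch size of 2, with
# hypothetical detection counts n1_boxes=3 and n2_boxes=1.
output_names = ['boxes_0', 'labels_0', 'scores_0', 'boxes_1', 'labels_1', 'scores_1']
predictions = [
    np.zeros((3, 4)), np.zeros(3), np.zeros(3),  # image 0: boxes, labels, scores
    np.zeros((1, 4)), np.zeros(1), np.zeros(1),  # image 1: boxes, labels, scores
]

# Group the flat lists into one {kind: array} dict per image by stripping
# the trailing image index from each output name.
batch_size = len(output_names) // 3
per_image = []
for i in range(batch_size):
    names = output_names[i * 3:(i + 1) * 3]
    arrays = predictions[i * 3:(i + 1) * 3]
    per_image.append({name.rsplit('_', 1)[0]: arr for name, arr in zip(names, arrays)})

print(sorted(per_image[1]))  # → ['boxes', 'labels', 'scores']
```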
```diff
@@ -419,7 +419,7 @@ The input is a preprocessed image, with the shape `(1, 3, 640, 640)` for a batch
 | Input | `(batch_size, num_channels, height, width)` | ndarray(float) | Input is a preprocessed image, with the shape `(1, 3, 640, 640)` for a batch size of 1, and a height of 640 and width of 640.|
 
 ### Output format
-ONNX model predictions contain multiple outputs. The first output is needed to perform non-max suppression for detections. For ease of use, automated ML displays the output format after the NMS postprocessing step. The output after NMS is a list of boxes, labels, and scores for each sample in the batch.
+ONNX model predictions contain multiple outputs. The first output is needed to perform nonmax suppression for detections. For ease of use, automated ML displays the output format after the NMS postprocessing step. The output after NMS is a list of boxes, labels, and scores for each sample in the batch.
 
 
 | Output name | Output shape | Output type | Description |
```
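The NMS postprocessing step this hunk's paragraph mentions can be illustrated with a generic greedy, IoU-based implementation in NumPy. This is a sketch of the technique, not the actual AutoML postprocessing helper; boxes are assumed to be in `x1, y1, x2, y2` form:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy nonmax suppression over xyxy boxes; returns kept indices."""
    order = scores.argsort()[::-1]  # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        # Intersection of the current box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop boxes that overlap the kept box too much.
        order = order[1:][iou <= iou_threshold]
    return keep

# Two heavily overlapping boxes and one separate box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]: the lower-scoring overlapping box is suppressed
```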
```diff
@@ -450,8 +450,8 @@ The output is a tuple of `output_names` and predictions. Here, `output_names` an
 
 | Output name | Output shape | Output type | Description |
 | -------- |----------|-----|------|
-| `output_names` | `(4*batch_size)` | List of keys | For a batch size of 2, `output_names` will be `['boxes_0', 'labels_0', 'scores_0', 'masks_0', 'boxes_1', 'labels_1', 'scores_1', 'masks_1']` |
-| `predictions` | `(4*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` will take the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n1_boxes, 1, height_onnx, width_onnx), (n2_boxes, 4), (n2_boxes), (n2_boxes), (n2_boxes, 1, height_onnx, width_onnx)]`. Here, values at each index correspond to same index in `output_names`. |
+| `output_names` | `(4*batch_size)` | List of keys | For a batch size of 2, `output_names` is `['boxes_0', 'labels_0', 'scores_0', 'masks_0', 'boxes_1', 'labels_1', 'scores_1', 'masks_1']` |
+| `predictions` | `(4*batch_size)` | List of ndarray(float) | For a batch size of 2, `predictions` takes the shape of `[(n1_boxes, 4), (n1_boxes), (n1_boxes), (n1_boxes, 1, height_onnx, width_onnx), (n2_boxes, 4), (n2_boxes), (n2_boxes), (n2_boxes, 1, height_onnx, width_onnx)]`. Here, values at each index correspond to same index in `output_names`. |
 
 | Name | Shape | Type | Description |
 | -------- |----------|-----|------|
```
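For the instance segmentation case, the four-per-image layout in this table can be unpacked the same way, with a mask-binarization step added. A sketch with made-up detection counts and mask sizes; the 0.5 threshold is illustrative, not the article's value:

```python
import numpy as np

# Illustrative instance segmentation output for a batch of 2, with 2 and 1
# detections respectively (hypothetical counts; height/width are placeholders).
h, w = 480, 640
predictions = [
    np.zeros((2, 4)), np.zeros(2), np.zeros(2), np.random.rand(2, 1, h, w),  # image 0
    np.zeros((1, 4)), np.zeros(1), np.zeros(1), np.random.rand(1, 1, h, w),  # image 1
]

# Each image contributes 4 consecutive entries: boxes, labels, scores, masks.
per_image = [predictions[i:i + 4] for i in range((0), len(predictions), 4)]
boxes, labels, scores, masks = per_image[0]

# Binarize the per-instance probability masks, a common postprocessing step.
binary_masks = masks[:, 0] >= 0.5
print(binary_masks.shape)  # → (2, 480, 640): one boolean mask per detection
```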
````diff
@@ -1261,7 +1261,7 @@ print(json.dumps(bounding_boxes_batch, indent=1))
 
 # [Multi-class image classification](#tab/multi-class)
 
-Visualize an input image with labels
+Visualize an input image with labels.
 
 ```python
 import matplotlib.image as mpimg
````
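The visualization these hunks introduce follows a common matplotlib pattern, which can be sketched as below. The image array and predicted label are stand-ins (the article reads a real sample image with `matplotlib.image`), and the `Agg` backend plus `savefig` replace `plt.show()` so the sketch runs headlessly:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Stand-in image and a hypothetical predicted class label.
img = np.random.rand(224, 224, 3)
predicted_label = "water_bottle"

fig, ax = plt.subplots()
ax.imshow(img)
ax.set_title(predicted_label)  # show the prediction above the image
ax.axis("off")
fig.savefig("prediction.png")
```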
````diff
@@ -1301,7 +1301,7 @@ plt.show()
 
 # [Multi-label image classification](#tab/multi-label)
 
-Visualize an input image with labels
+Visualize an input image with labels.
 
 ```python
 import matplotlib.image as mpimg
````
````diff
@@ -1342,7 +1342,7 @@ plt.show()
 
 # [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
 
-Visualize an input image with boxes and labels
+Visualize an input image with boxes and labels.
 
 ```python
 import matplotlib.image as mpimg
````
````diff
@@ -1383,7 +1383,7 @@ plt.show()
 
 # [Object detection with YOLO](#tab/object-detect-yolo)
 
-Visualize an input image with boxes and labels
+Visualize an input image with boxes and labels.
 
 ```python
 import matplotlib.image as mpimg
````
````diff
@@ -1424,7 +1424,7 @@ plt.show()
 
 # [Instance segmentation](#tab/instance-segmentation)
 
-Visualize a sample input image with masks and labels
+Visualize a sample input image with masks and labels.
 
 ```python
 import matplotlib.patches as patches
````

0 commit comments