Commit 1ef34fa

Merge pull request #222921 from vadthyavath/rvadthyavath/onnx_doc_updates
Rvadthyavath/onnx doc updates
2 parents 55f4411 + 6c83567

File tree

2 files changed: +18 −18 lines changed


articles/machine-learning/how-to-inference-onnx-automl-image-models.md

Lines changed: 10 additions & 10 deletions
@@ -145,7 +145,7 @@ env = Environment(
 )
 ```
 
-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
 
 To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.

@@ -334,7 +334,7 @@ Every ONNX model has a predefined set of input and output formats.
 
 # [Multi-class image classification](#tab/multi-class)
 
-This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
+This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multiclass-task-fridge-items).
 
 ### Input format

@@ -355,7 +355,7 @@ The output is an array of logits for all the classes/labels.
 
 # [Multi-label image classification](#tab/multi-label)
 
-This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
+This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multilabel-task-fridge-items).
 
 ### Input format

@@ -377,7 +377,7 @@ The output is an array of logits for all the classes/labels.
 
 # [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
 
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).
 
 ### Input format

@@ -409,7 +409,7 @@ The following table describes boxes, labels and scores returned for each sample
 
 # [Object detection with YOLO](#tab/object-detect-yolo)
 
-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).
 
 ### Input format

@@ -431,7 +431,7 @@ Each cell in the list indicates box detections of a sample with shape `(n_boxes,
 
 # [Instance segmentation](#tab/instance-segmentation)
 
-For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
+For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-instance-segmentation-task-fridge-items).
 
 >[!IMPORTANT]
 > Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
@@ -476,7 +476,7 @@ Perform the following preprocessing steps for the ONNX model inference:
 5. Convert to float type.
 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
 
-If you chose different values for the [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
 
 Get the input shape needed for the ONNX model.
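
As a reference for the preprocessing steps listed in this hunk, a minimal sketch of the classification pipeline might look like the following (assuming PIL and NumPy; `valid_resize_size = 256` and `valid_crop_size = 224` are illustrative defaults, not values taken from this diff):

```python
import numpy as np
from PIL import Image

def preprocess(image_path, resize_size=256, crop_size=224):
    # Resize so the shorter side equals resize_size, then center-crop.
    image = Image.open(image_path).convert("RGB")
    scale = resize_size / min(image.size)
    image = image.resize((round(image.width * scale), round(image.height * scale)))
    left = (image.width - crop_size) // 2
    top = (image.height - crop_size) // 2
    image = image.crop((left, top, left + crop_size, top + crop_size))

    # HWC uint8 -> normalized CHW float32 with ImageNet statistics.
    array = np.asarray(image, dtype=np.float32) / 255.0
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    array = (array - mean) / std
    return array.transpose(2, 0, 1)[np.newaxis]  # add batch dimension
```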

@@ -625,7 +625,7 @@ Perform the following preprocessing steps for the ONNX model inference. These st
 5. Convert to float type.
 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
 
-If you chose different values for the [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
 
 Get the input shape needed for the ONNX model.

@@ -848,7 +848,7 @@ batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
 batch, channel, height_onnx, width_onnx
 ```
 
-For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).
 
 ```python
 import glob
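
The linked yolo_onnx_preprocessing_utils.py is the source of truth for YOLO preprocessing. As a rough illustration only, a letterbox-style resize (the technique such utilities typically implement) might look like this sketch, where the 640x640 target and padding value 114 are assumptions rather than values taken from the diff:

```python
import numpy as np
from PIL import Image

def letterbox(image_path, target=640, pad_value=114):
    # Fit the image inside a target x target square, preserving aspect
    # ratio, and pad the remainder with a constant border.
    image = Image.open(image_path).convert("RGB")
    scale = target / max(image.size)
    new_w, new_h = round(image.width * scale), round(image.height * scale)
    canvas = np.full((target, target, 3), pad_value, dtype=np.uint8)
    top, left = (target - new_h) // 2, (target - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = np.asarray(image.resize((new_w, new_h)))
    return canvas.transpose(2, 0, 1)[np.newaxis].astype(np.float32) / 255.0
```
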
@@ -887,7 +887,7 @@ Perform the following preprocessing steps for the ONNX model inference:
 4. Convert to float type.
 5. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
 
-For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) for Mask R-CNN.
+For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](reference-automl-images-hyperparameters.md) for Mask R-CNN.
 
 ```python
 import glob
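
A sketch of how `min_size` and `max_size` typically bound the resize dimensions for Mask R-CNN (the defaults of 600 and 1333 below are illustrative assumptions, not values read from this diff):

```python
def bounded_resize_dims(width, height, min_size=600, max_size=1333):
    # Scale so the shorter side reaches min_size, but shrink the scale
    # if that would push the longer side past max_size.
    scale = min_size / min(width, height)
    if max(width, height) * scale > max_size:
        scale = max_size / max(width, height)
    return round(width * scale), round(height * scale)

resize_width, resize_height = bounded_resize_dims(1024, 768)
```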

articles/machine-learning/v1/how-to-inference-onnx-automl-image-models-v1.md

Lines changed: 8 additions & 8 deletions
@@ -43,7 +43,7 @@ In this guide, you'll learn how to use [Python APIs for ONNX Runtime](https://on
 
 ## Prerequisites
 
-* Get an AutoML-trained computer vision model for any of the supported image tasks: classification, object detection, or instance segmentation. [Learn more about AutoML support for computer vision tasks](../how-to-auto-train-image-models.md).
+* Get an AutoML-trained computer vision model for any of the supported image tasks: classification, object detection, or instance segmentation. [Learn more about AutoML support for computer vision tasks](how-to-auto-train-image-models-v1.md).
 
 * Install the [onnxruntime](https://onnxruntime.ai/docs/get-started/with-python.html) package. The methods in this article have been tested with versions 1.3.0 to 1.8.0.
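
A quick way to confirm the installed package falls in that tested range:

```python
import onnxruntime

# The article's methods were tested with onnxruntime 1.3.0 to 1.8.0.
print(onnxruntime.__version__)
```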

@@ -62,7 +62,7 @@ Within the best child run, go to **Outputs+logs** > **train_artifacts**. Use the
 - *labels.json*: File that contains all the classes or labels in the training dataset.
 - *model.onnx*: Model in ONNX format.
 
-![Screenshot that shows selections for downloading O N N X model files.](.././media/how-to-inference-onnx-automl-image-models/onnx-files-manual-download.png)
+![Screenshot that shows selections for downloading ONNX model files.](.././media/how-to-inference-onnx-automl-image-models/onnx-files-manual-download.png)
 
 Save the downloaded model files in a directory. The example in this article uses the *./automl_models* directory.

@@ -124,7 +124,7 @@ automl_image_run = AutoMLRun(experiment=experiment, run_id=run_id)
 best_child_run = automl_image_run.get_best_child()
 ```
 
-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model algorithm section](../how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](reference-automl-images-hyperparameters-v1.md#model-specific-hyperparameters) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models-v1.md#supported-model-algorithms).
 
 To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.
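
For context, submitting a script with model specific arguments in the v1 SDK generally follows the `ScriptRunConfig` pattern sketched below; the script name and argument values here are placeholders for illustration, not values taken from this diff:

```python
from azureml.core import Experiment, ScriptRunConfig

# Placeholder script name and argument values; substitute the script and
# model settings from your own AutoML run's outputs folder.
config = ScriptRunConfig(
    source_directory=".",
    script="onnx_batch_model_generator.py",  # hypothetical script name
    arguments=["--model_name", "fasterrcnn_resnet50_fpn", "--batch_size", "8"],
    compute_target=compute_target,  # assumed to be defined earlier
    environment=env,                # assumed AzureML Environment
)
run = Experiment(ws, "onnx-export").submit(config)  # ws: assumed Workspace
```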

@@ -216,7 +216,7 @@ onnx_model_path = 'automl_models/model.onnx' # local path to save the model
 remote_run.download_file(name='outputs/model_'+str(batch_size)+'.onnx', output_file_path=onnx_model_path)
 ```
 
-After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](../how-to-prepare-datasets-for-automl-images.md) for each vision task.
+After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](how-to-prepare-datasets-for-automl-images-v1.md) for each vision task.
 
 We've trained the models for all vision tasks with their respective datasets to demonstrate ONNX model inference.
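
A minimal sketch of that inferencing step with ONNX Runtime (the 1x3x224x224 dummy batch is illustrative; real inputs come from the preprocessing steps described later in the article):

```python
import numpy as np
import onnxruntime

# Load the downloaded model and inspect its expected input.
session = onnxruntime.InferenceSession("automl_models/model.onnx")
model_input = session.get_inputs()[0]
print(model_input.name, model_input.shape)

# Run inference on a preprocessed float32 NCHW batch (dummy data here).
img_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {model_input.name: img_batch})
```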

@@ -411,7 +411,7 @@ Perform the following preprocessing steps for the ONNX model inference:
 5. Convert to float type.
 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
 
-If you chose different values for the [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters-v1.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
 
 Get the input shape needed for the ONNX model.

@@ -560,7 +560,7 @@ Perform the following preprocessing steps for the ONNX model inference. These st
 5. Convert to float type.
 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
 
-If you chose different values for the [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters-v1.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
 
 Get the input shape needed for the ONNX model.

@@ -822,7 +822,7 @@ Perform the following preprocessing steps for the ONNX model inference:
 4. Convert to float type.
 5. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.
 
-For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) for Mask R-CNN.
+For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](reference-automl-images-hyperparameters-v1.md) for Mask R-CNN.
 
 ```python
 import glob
@@ -1437,5 +1437,5 @@ display_detections(img, boxes.copy(), labels, scores, masks.copy(),
 ---
 
 ## Next steps
-* [Learn more about computer vision tasks in AutoML](../how-to-auto-train-image-models.md)
+* [Learn more about computer vision tasks in AutoML](how-to-auto-train-image-models-v1.md)
 * [Troubleshoot AutoML experiments](../how-to-troubleshoot-auto-ml.md)
