articles/machine-learning/how-to-inference-onnx-automl-image-models.md (10 additions, 10 deletions)
@@ -145,7 +145,7 @@ env = Environment(
 )
 ```

-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).

 To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.
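For context on the hunk above, here is a hypothetical sketch of passing such model specific arguments when submitting the conversion script as an AzureML v2 command job. The script name, argument names and values, code folder, and compute target are placeholders, not taken from this diff; the real values come from the scoring script of your best child run.

```python
# Hypothetical submission sketch; script name, arguments, and compute are placeholders.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

job = command(
    code="./scripts",  # folder holding the conversion script (placeholder)
    command="python convert_to_onnx.py --model_name fasterrcnn_resnet50_fpn --batch_size 8",
    environment=env,  # the Environment object built earlier in the article
    compute="gpu-cluster",  # placeholder compute target
)
ml_client.jobs.create_or_update(job)
```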
@@ -334,7 +334,7 @@ Every ONNX model has a predefined set of input and output formats.
-This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass).
+This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multiclass-task-fridge-items).

 ### Input format
@@ -355,7 +355,7 @@ The output is an array of logits for all the classes/labels.
-This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel).
+This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-classification-multilabel-task-fridge-items).

 ### Input format
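Both classification hunks above note that the output is an array of logits. As an illustration only, a small sketch of turning those logits into probabilities, assuming the standard conventions (softmax for multi-class, element-wise sigmoid for multi-label) rather than anything stated in this diff:

```python
# Logits to probabilities; the softmax/sigmoid conventions are assumptions.
import numpy as np

def softmax(logits):
    exps = np.exp(logits - logits.max(axis=-1, keepdims=True))  # numerically stable
    return exps / exps.sum(axis=-1, keepdims=True)

def sigmoid(logits):
    return 1.0 / (1.0 + np.exp(-logits))

logits = np.array([[2.1, -0.3, 0.8, -1.5]])  # one sample, four classes/labels
print(softmax(logits))  # multi-class: one probability distribution per sample
print(sigmoid(logits))  # multi-label: independent per-class probabilities
```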
@@ -377,7 +377,7 @@ The output is an array of logits for all the classes/labels.
 # [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)

-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).

 ### Input format
@@ -409,7 +409,7 @@ The following table describes boxes, labels and scores returned for each sample
 # [Object detection with YOLO](#tab/object-detect-yolo)

-This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).

 ### Input format
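The hunk header above mentions the boxes, labels, and scores returned for each sample. Purely as an illustration, here is one way such detections could be filtered by confidence; the array layout (x1, y1, x2, y2) and the threshold value are assumptions, not taken from the diff:

```python
# Score-threshold filtering of detections; the box layout is assumed.
import numpy as np

def filter_detections(boxes, labels, scores, score_threshold=0.5):
    keep = scores >= score_threshold  # boolean mask over detections
    return boxes[keep], labels[keep], scores[keep]

boxes = np.array([[10.0, 20.0, 110.0, 220.0], [5.0, 5.0, 50.0, 60.0]])
labels = np.array([1, 3])
scores = np.array([0.92, 0.31])
print(filter_detections(boxes, labels, scores))  # keeps only the 0.92 detection
```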
@@ -431,7 +431,7 @@ Each cell in the list indicates box detections of a sample with shape `(n_boxes,
-For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
+For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-instance-segmentation-task-fridge-items).

 >[!IMPORTANT]
 > Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
@@ -476,7 +476,7 @@ Perform the following preprocessing steps for the ONNX model inference:
 5. Convert to float type.
 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.

-If you chose different values for the [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.

 Get the input shape needed for the ONNX model.
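As context for the preprocessing hunk above, a minimal sketch of the float conversion and ImageNet normalization steps plus the input-shape query, assuming Pillow, NumPy, and onnxruntime. The square resize and crop is a simplification, and the sizes are placeholders for `valid_resize_size` and `valid_crop_size`:

```python
# Simplified classification preprocessing sketch; sizes are placeholders.
import numpy as np
import onnxruntime
from PIL import Image

def preprocess(image_path, resize_size=256, crop_size=224):
    image = Image.open(image_path).convert("RGB").resize((resize_size, resize_size))
    offset = (resize_size - crop_size) // 2
    image = image.crop((offset, offset, offset + crop_size, offset + crop_size))
    array = np.asarray(image).astype(np.float32) / 255.0  # convert to float
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    array = (array - mean) / std  # ImageNet normalization
    return array.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW, batch of 1

session = onnxruntime.InferenceSession("automl_models/model.onnx")
print(session.get_inputs()[0].shape)  # the input shape the model expects
```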
@@ -625,7 +625,7 @@ Perform the following preprocessing steps for the ONNX model inference. These st
 5. Convert to float type.
 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.

-If you chose different values for the [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.

@@ -851,7 +851,7 @@
-For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection).
+For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items).

 ```python
 import glob
@@ -887,7 +887,7 @@ Perform the following preprocessing steps for the ONNX model inference:
 4. Convert to float type.
 5. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.

-For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](how-to-auto-train-image-models.md#configure-experiments) for Mask R-CNN.
+For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](reference-automl-images-hyperparameters.md) for Mask R-CNN.
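The Mask R-CNN hunk above bounds the resize by `min_size` and `max_size`. As an assumption that mirrors the common Mask R-CNN convention (not something this diff states), that bound could be computed like this:

```python
# Assumed bounded-resize rule: scale the short side to min_size unless the
# long side would then exceed max_size, in which case cap it there.
def bounded_resize_dims(height, width, min_size=800, max_size=1333):
    scale = min_size / min(height, width)
    if max(height, width) * scale > max_size:
        scale = max_size / max(height, width)
    return int(round(height * scale)), int(round(width * scale))

print(bounded_resize_dims(480, 640))  # (800, 1067) with these default bounds
```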
articles/machine-learning/v1/how-to-inference-onnx-automl-image-models-v1.md (8 additions, 8 deletions)
@@ -43,7 +43,7 @@ In this guide, you'll learn how to use [Python APIs for ONNX Runtime](https://on
 ## Prerequisites

-* Get an AutoML-trained computer vision model for any of the supported image tasks: classification, object detection, or instance segmentation. [Learn more about AutoML support for computer vision tasks](../how-to-auto-train-image-models.md).
+* Get an AutoML-trained computer vision model for any of the supported image tasks: classification, object detection, or instance segmentation. [Learn more about AutoML support for computer vision tasks](how-to-auto-train-image-models-v1.md).

 * Install the [onnxruntime](https://onnxruntime.ai/docs/get-started/with-python.html) package. The methods in this article have been tested with versions 1.3.0 to 1.8.0.
@@ -62,7 +62,7 @@ Within the best child run, go to **Outputs+logs** > **train_artifacts**. Use the
 - *labels.json*: File that contains all the classes or labels in the training dataset.
 - *model.onnx*: Model in ONNX format.

-
+

 Save the downloaded model files in a directory. The example in this article uses the *./automl_models* directory.
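As context for this hunk, a minimal sketch of loading the two downloaded artifacts from the *./automl_models* directory mentioned above (paths assumed to follow that example):

```python
# Load the downloaded artifacts; paths follow the ./automl_models example above.
import json
import onnxruntime

with open("automl_models/labels.json") as f:
    classes = json.load(f)  # every class/label in the training dataset

session = onnxruntime.InferenceSession("automl_models/model.onnx")
print(classes)
print([inp.name for inp in session.get_inputs()])  # model input names
```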
@@ -127,7 +127,7 @@
-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model algorithm section](../how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](reference-automl-images-hyperparameters-v1.md#model-specific-hyperparameters) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models-v1.md#supported-model-algorithms).

 To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.
@@ -216,7 +216,7 @@ onnx_model_path = 'automl_models/model.onnx' # local path to save the model
-After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](../how-to-prepare-datasets-for-automl-images.md) for each vision task.
+After the model downloading step, you use the ONNX Runtime Python package to perform inferencing by using the *model.onnx* file. For demonstration purposes, this article uses the datasets from [How to prepare image datasets](how-to-prepare-datasets-for-automl-images-v1.md) for each vision task.

 We've trained the models for all vision tasks with their respective datasets to demonstrate ONNX model inference.
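To illustrate the inferencing step this hunk describes, a bare-bones ONNX Runtime call might look like the following; the dummy input shape is a placeholder, so query the real one from the session as shown earlier:

```python
# Single inference call sketch; the dummy input shape is a placeholder.
import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("automl_models/model.onnx")
input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder batch
outputs = session.run(None, {input_name: batch})  # None returns all outputs
print([out.shape for out in outputs])
```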
@@ -411,7 +411,7 @@ Perform the following preprocessing steps for the ONNX model inference:
 5. Convert to float type.
 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.

-If you chose different values for the [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters-v1.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.

 Get the input shape needed for the ONNX model.
@@ -560,7 +560,7 @@ Perform the following preprocessing steps for the ONNX model inference. These st
 5. Convert to float type.
 6. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.

-If you chose different values for the [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.
+If you chose different values for the [hyperparameters](reference-automl-images-hyperparameters-v1.md) `valid_resize_size` and `valid_crop_size` during training, then those values should be used.

 Get the input shape needed for the ONNX model.
@@ -822,7 +822,7 @@ Perform the following preprocessing steps for the ONNX model inference:
 4. Convert to float type.
 5. Normalize with ImageNet's `mean` = `[0.485, 0.456, 0.406]` and `std` = `[0.229, 0.224, 0.225]`.

-For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](../how-to-auto-train-image-models.md#configure-experiments) for Mask R-CNN.
+For `resize_height` and `resize_width`, you can also use the values that you used during training, bounded by the `min_size` and `max_size` [hyperparameters](reference-automl-images-hyperparameters-v1.md) for Mask R-CNN.