
Commit aa6ec87

Merge pull request #179036 from nibaccam/patch-1
Limitation warning and notices
2 parents 6d9e4c6 + d0fdcdf commit aa6ec87

File tree

1 file changed: +9 −1 lines changed


articles/machine-learning/how-to-inference-onnx-automl-image-models.md

Lines changed: 9 additions & 1 deletion
@@ -223,6 +223,9 @@ The output is a list of boxes, labels, and scores. For YOLO, you need the first
 
 For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/81c7d33ed82f62f419472bc11f7e1bad448ff15b/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation).
 
+> [!IMPORTANT]
+> Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only.
 
 ### Input format
 
 The input is a preprocessed image. The ONNX model for Mask R-CNN has been exported to work with images of different shapes. We recommend that you resize them to a fixed size that's consistent with training image sizes, for better performance.
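The fixed-size resize recommended above can be sketched as follows. This is a minimal illustration, not the notebook's actual preprocessing: the default `resize_height`/`resize_width` values and the simple scale-to-[0, 1] normalization are assumptions, and the real pipeline may also apply mean/std normalization.

```python
from PIL import Image
import numpy as np

def preprocess_mask_rcnn(image_path, resize_height=600, resize_width=800):
    """Resize an image to a fixed size and convert it to the NCHW
    float32 tensor shape that ONNX Runtime expects.
    Sizes and normalization here are illustrative assumptions."""
    img = Image.open(image_path).convert("RGB")
    # PIL's resize takes (width, height)
    img = img.resize((resize_width, resize_height), Image.BILINEAR)
    img_data = np.asarray(img, dtype=np.float32) / 255.0  # HWC in [0, 1]
    img_data = np.transpose(img_data, (2, 0, 1))          # HWC -> CHW
    return np.expand_dims(img_data, axis=0)               # CHW -> NCHW, batch of 1
```

Keeping the resize consistent with the training image sizes, as the article advises, generally yields better detection quality than feeding arbitrary shapes.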
@@ -483,7 +486,7 @@ img_data = preprocess(img, resize_size, crop_size_onnx)
 
 # [Object detection with Faster R-CNN](#tab/object-detect-cnn)
 
-For object detection with the Faster R-CNN algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`, and get the expected input height and width with the following code.
+For object detection with the Faster R-CNN algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`. You can get the expected input height and width with the following code.
 
 ```python
 batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
@@ -551,6 +554,8 @@ img_data, pad = preprocess(test_image_path)
 ```
 
 # [Instance segmentation](#tab/instance-segmentation)
+> [!IMPORTANT]
+> Only Mask R-CNN is supported for instance segmentation tasks. The preprocessing steps are based on Mask R-CNN only.
 
 Perform the following preprocessing steps for the ONNX model inference:

@@ -609,6 +614,9 @@ img_data = preprocess(img, resize_height, resize_width)
 
 Inferencing with ONNX Runtime differs for each computer vision task.
 
+> [!WARNING]
+> Batch scoring is not currently supported for all computer vision tasks.
+
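Although the per-task details differ, each task tab scores a single image the same way: look up the model's input name, feed the preprocessed NCHW tensor, and collect every output. A minimal sketch, assuming `session` is an `onnxruntime.InferenceSession` created from your exported model:

```python
import numpy as np

def score_image(session, img_data):
    """Run one preprocessed NCHW image through an ONNX Runtime session.
    The session is passed in so this sketch stays task-agnostic."""
    input_name = session.get_inputs()[0].name
    # Passing None as the output list asks ONNX Runtime to return every
    # model output (boxes/labels/scores for detection, logits for
    # classification, and so on).
    return session.run(None, {input_name: img_data.astype(np.float32)})
```

Because batch scoring is not currently supported for all tasks, this sketch assumes a batch dimension of 1 in `img_data`.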
 # [Multi-class image classification](#tab/multi-class)
 
 ```python
