articles/ai-services/custom-vision-service/custom-vision-onnx-windows-ml.md (1 addition, 2 deletions)
@@ -41,7 +41,7 @@ The example application is available at the [Azure AI services ONNX Custom Visio
To use your own image classifier model, follow these steps:
-1. Create and train a classifier with the Custom Vision Service. For instructions on how to do this, see [Create and train a classifier](./getting-started-build-a-classifier.md). Use one of the **compact** domains such as **General (compact)**.
+1. Create and train a classifier with the Custom Vision Service. For instructions on how to do this, see [Create and train a classifier](./getting-started-build-a-classifier.md). Use one of the **compact** domains such as **General (compact)**.
* If you have an existing classifier that uses a different domain, you can convert it to **compact** in the project settings. Then, re-train your project before continuing.
1. Export your model. Switch to the Performance tab and select an iteration that was trained with a **compact** domain. Select the **Export** button that appears. Then select **ONNX**, and then **Export**. Once the file is ready, select the **Download** button. For more information on export options, see [Export your model](./export-your-model.md).
1. Open the downloaded *.zip* file and extract the *model.onnx* file from it. This file contains your classifier model.
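If you want to sanity-check the extracted *model.onnx* before wiring it into the Windows ML sample, one option (not covered in the article itself) is to open it with the onnxruntime Python package. A minimal sketch:

```python
import onnxruntime as ort

# Load the classifier exported from Custom Vision and confirm it parses.
session = ort.InferenceSession("model.onnx")

# Print the expected input and output tensors so you know what the app must feed it.
for i in session.get_inputs():
    print("input:", i.name, i.shape, i.type)
for o in session.get_outputs():
    print("output:", o.name, o.shape, o.type)
```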
@@ -60,6 +60,5 @@ To discover other ways to export and use a Custom Vision model, see the followin
* [Export your model](./export-your-model.md)
* [Use exported TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
* [Use exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
-* [Use exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
For more information on using ONNX models with Windows ML, see [Integrate a model into your app with Windows ML](/windows/ai/windows-ml/integrate-model).
articles/ai-services/custom-vision-service/export-model-python.md (14 additions, 15 deletions)
@@ -33,7 +33,7 @@ This guide shows you how to use an [exported TensorFlow model](./export-your-mod
pip install numpy
pip install opencv-python
```
-
+

## Load your model and tags

The downloaded _.zip_ file from the export step contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
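The loading code itself falls outside this hunk; for reference, a minimal sketch of loading a _model.pb_ and _labels.txt_ pair, assuming the frozen-graph (GraphDef) format that a _model.pb_ file implies:

```python
import tensorflow as tf

# File names as shipped in the exported .zip; adjust the paths if you extracted elsewhere.
filename = "model.pb"
labels_filename = "labels.txt"

# Parse the frozen GraphDef and import it into a fresh graph.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(filename, "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.compat.v1.import_graph_def(graph_def, name="")

# Build the list of class labels, one label per line.
with open(labels_filename, "rt") as lf:
    labels = [line.strip() for line in lf]
```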
@@ -65,19 +65,19 @@ with open(labels_filename, 'rt') as lf:
There are a few steps you need to take to prepare an image for prediction. These steps mimic the image manipulation performed during training.

1. Open the file and create an image in the BGR color space
-
+
```Python
from PIL import Image
import numpy as np
import cv2
-
+
# Load from a file
imageFile = "<path to your image file>"
image = Image.open(imageFile)
-
+
# Update orientation based on EXIF tags, if the file has orientation info.
image = update_orientation(image)
-
+
# Convert to OpenCV format
image = convert_to_opencv(image)
```
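The snippet calls two helpers, `update_orientation` and `convert_to_opencv`, that the full article defines elsewhere. Rough equivalents of what they need to do (not the article's exact implementations):

```python
import cv2
import numpy as np
from PIL import ImageOps

def update_orientation(image):
    # Apply the EXIF orientation tag (if any) so the pixels match how the photo was taken.
    return ImageOps.exif_transpose(image)

def convert_to_opencv(image):
    # PIL images are RGB; OpenCV expects BGR NumPy arrays, so convert and swap channels.
    rgb = np.array(image.convert("RGB"))
    return cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
```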
@@ -109,40 +109,40 @@ There are a few steps you need to take to prepare an image for prediction. These
-* Provide the project ID, iteration ID of the model you want to export.
-* The *platform* parameter specifies the platform to export to: allowed values are `CoreML`, `TensorFlow`, `DockerFile`, `ONNX`, `VAIDK`, and `OpenVino`.
+* Provide the project ID, iteration ID of the model you want to export.
+* The *platform* parameter specifies the platform to export to: allowed values are `CoreML`, `TensorFlow`, `DockerFile`, `ONNX`, `VAIDK`, and `OpenVino`.
* The *flavor* parameter specifies the format of the exported model: allowed values are `Linux`, `Windows`, `ONNX10`, `ONNX12`, `ARM`, `TensorFlowNormal`, and `TensorFlowLite`.
* The *raw* parameter gives you the option to retrieve the raw JSON response along with the object model response.
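These parameters correspond to the export call in the Custom Vision training SDK. A hedged sketch using the `azure-cognitiveservices-vision-customvision` package, with placeholder endpoint, key, and IDs:

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

# Placeholder values; substitute your own resource endpoint, training key, and IDs.
credentials = ApiKeyCredentials(in_headers={"Training-key": "<your training key>"})
trainer = CustomVisionTrainingClient("<your endpoint>", credentials)

project_id = "<your project ID>"
iteration_id = "<iteration ID of the trained compact model>"

# platform selects the export target; flavor narrows the format; raw=True also returns the raw JSON.
export = trainer.export_iteration(project_id, iteration_id, platform="ONNX", flavor="ONNX12", raw=False)

# The returned export object reports its status and exposes a download URI once ready.
print(export.status, export.download_uri)
```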
@@ -93,5 +93,4 @@ Integrate your exported model into an application by exploring one of the follow
* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
* See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift.
* See the sample for [TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
-* See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
* See the sample for how to use the exported model [(VAIDK/OpenVino)](https://github.com/Azure-Samples/customvision-export-samples)
articles/ai-services/custom-vision-service/export-your-model.md (1 addition, 2 deletions)
@@ -47,7 +47,7 @@ To convert the domain of an existing model, follow these steps:
:::image type="content" source="media/export-your-model/gear-icon.png" alt-text="Screenshot that shows the gear icon.":::

-1. In the **Domains** section, select one of the **compact** domains. Select **Save Changes** to save the changes.
+1. In the **Domains** section, select one of the **compact** domains. Select **Save Changes** to save the changes.
> [!NOTE]
> For Vision AI Dev Kit, the project must be created with the **General (Compact)** domain, and you must specify the **Vision AI Dev Kit** option under the **Export Capabilities** section.
@@ -77,4 +77,3 @@ To integrate your exported model into an application, explore one of the followi
* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
* See the Swift sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
* See the Android sample for [TensorFlow model in an Android app](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
-* See the Xamarin iOS sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)