Commit 926c7f3: Remove Xamarin references.
1 parent: c763ec4

6 files changed: +19 -210 lines changed

articles/ai-services/custom-vision-service/custom-vision-onnx-windows-ml.md

Lines changed: 1 addition & 2 deletions
@@ -41,7 +41,7 @@ The example application is available at the [Azure AI services ONNX Custom Visio

To use your own image classifier model, follow these steps:

-1. Create and train a classifier with the Custom Vision Service. For instructions on how to do this, see [Create and train a classifier](./getting-started-build-a-classifier.md). Use one of the **compact** domains such as **General (compact)**.
+1. Create and train a classifier with the Custom Vision Service. For instructions on how to do this, see [Create and train a classifier](./getting-started-build-a-classifier.md). Use one of the **compact** domains such as **General (compact)**.
   * If you have an existing classifier that uses a different domain, you can convert it to **compact** in the project settings. Then, re-train your project before continuing.
1. Export your model. Switch to the Performance tab and select an iteration that was trained with a **compact** domain. Select the **Export** button that appears. Then select **ONNX**, and then **Export**. Once the file is ready, select the **Download** button. For more information on export options, see [Export your model](./export-your-model.md).
1. Open the downloaded *.zip* file and extract the *model.onnx* file from it. This file contains your classifier model.
@@ -60,6 +60,5 @@ To discover other ways to export and use a Custom Vision model, see the followin
* [Export your model](./export-your-model.md)
* [Use exported TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
* [Use exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
-* [Use exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)

For more information on using ONNX models with Windows ML, see [Integrate a model into your app with Windows ML](/windows/ai/windows-ml/integrate-model).
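
For a quick sanity check of the extracted *model.onnx* outside Windows ML, a minimal Python sketch using onnxruntime. The fixed NCHW float32 input layout is an assumption, not something this article specifies; inspect `session.get_inputs()[0]` for your export before relying on it.

```python
# Minimal sanity check for the exported model.onnx using onnxruntime.
# Assumption: the model takes a single NCHW float32 image input; some
# Custom Vision exports differ, so inspect input_meta before trusting this.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession("model.onnx")
input_meta = session.get_inputs()[0]

# Assumes a fixed input shape such as (1, 3, 224, 224).
_, channels, height, width = input_meta.shape
image = Image.open("<path to your image file>").convert("RGB").resize((width, height))
tensor = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[np.newaxis, ...]

# Run inference; outputs[0] holds the class scores.
outputs = session.run(None, {input_meta.name: tensor})
print(outputs[0])
```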

articles/ai-services/custom-vision-service/export-model-python.md

Lines changed: 14 additions & 15 deletions
@@ -33,7 +33,7 @@ This guide shows you how to use an [exported TensorFlow model](./export-your-mod
pip install numpy
pip install opencv-python
```
-
+
## Load your model and tags

The downloaded _.zip_ file from the export step contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
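
The loading code itself is unchanged by this commit and elided from the diff; for context, it follows the standard TensorFlow 1.x frozen-graph pattern. A sketch, assuming the *model.pb* and *labels.txt* file names from the paragraph above:

```python
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
labels = []

# Default file names from the exported .zip; update as needed.
filename = "model.pb"
labels_filename = "labels.txt"

# Import the TF graph from the frozen .pb file.
with tf.io.gfile.GFile(filename, 'rb') as f:
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# Create a list of labels, one per line.
with open(labels_filename, 'rt') as lf:
    for l in lf:
        labels.append(l.strip())
```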
@@ -65,19 +65,19 @@ with open(labels_filename, 'rt') as lf:
There are a few steps you need to take to prepare an image for prediction. These steps mimic the image manipulation performed during training.

1. Open the file and create an image in the BGR color space
-
+
    ```Python
    from PIL import Image
    import numpy as np
    import cv2
-
+
    # Load from a file
    imageFile = "<path to your image file>"
    image = Image.open(imageFile)
-
+
    # Update orientation based on EXIF tags, if the file has orientation info.
    image = update_orientation(image)
-
+
    # Convert to OpenCV format
    image = convert_to_opencv(image)
    ```
@@ -109,40 +109,40 @@ There are a few steps you need to take to prepare an image for prediction. These
    with tf.compat.v1.Session() as sess:
        input_tensor_shape = sess.graph.get_tensor_by_name('Placeholder:0').shape.as_list()
        network_input_size = input_tensor_shape[1]
-
+
    # Crop the center for the specified network_input_Size
    augmented_image = crop_center(augmented_image, network_input_size, network_input_size)
-
+
    ```

1. Define helper functions. The steps above use the following helper functions:
-
+
    ```Python
    def convert_to_opencv(image):
        # RGB -> BGR conversion is performed as well.
        image = image.convert('RGB')
        r,g,b = np.array(image).T
        opencv_image = np.array([b,g,r]).transpose()
        return opencv_image
-
+
    def crop_center(img,cropx,cropy):
        h, w = img.shape[:2]
        startx = w//2-(cropx//2)
        starty = h//2-(cropy//2)
        return img[starty:starty+cropy, startx:startx+cropx]
-
+
    def resize_down_to_1600_max_dim(image):
        h, w = image.shape[:2]
        if (h < 1600 and w < 1600):
            return image
-
+
        new_size = (1600 * w // h, 1600) if (h > w) else (1600, 1600 * h // w)
        return cv2.resize(image, new_size, interpolation = cv2.INTER_LINEAR)
-
+
    def resize_to_256_square(image):
        h, w = image.shape[:2]
        return cv2.resize(image, (256, 256), interpolation = cv2.INTER_LINEAR)
-
+
    def update_orientation(image):
        exif_orientation_tag = 0x0112
        if hasattr(image, '_getexif'):
@@ -159,7 +159,7 @@ There are a few steps you need to take to prepare an image for prediction. These
            image = image.transpose(Image.FLIP_LEFT_RIGHT)
    return image
    ```
-
+
## Classify an image

Once the image is prepared as a tensor, we can send it through the model for a prediction.
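
The prediction call that follows this heading is elided from the diff; it runs the prepared tensor through the graph in a `tf.compat.v1.Session`. A sketch, where `'Placeholder:0'` is the input tensor named earlier in this article and `'loss:0'` is an assumed output tensor name that may differ in your export:

```python
# Run the prepared image tensor through the model.
# 'loss:0' is an assumption about the output layer name - verify it
# against your exported model before use.
output_layer = 'loss:0'
input_node = 'Placeholder:0'

with tf.compat.v1.Session() as sess:
    prob_tensor = sess.graph.get_tensor_by_name(output_layer)
    predictions = sess.run(prob_tensor, {input_node: [augmented_image]})
```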
@@ -202,4 +202,3 @@ The results of running the image tensor through the model will then need to be m
Next, learn how to wrap your model into a mobile application:
* [Use your exported TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
* [Use your exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
-* [Use your exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
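
The mapping step the last hunk header refers to reduces to taking the label with the highest returned probability; a sketch using the `labels` list loaded earlier:

```python
# Map the raw prediction scores back to the label with the
# highest probability.
highest_probability_index = np.argmax(predictions)
print('Classified as: ' + labels[highest_probability_index])
```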

articles/ai-services/custom-vision-service/export-programmatically.md

Lines changed: 2 additions & 3 deletions
@@ -38,8 +38,8 @@ trainer = CustomVisionTrainingClient(ENDPOINT, credentials)
## Call the export method

Call the **export_iteration** method.
-* Provide the project ID, iteration ID of the model you want to export.
-* The *platform* parameter specifies the platform to export to: allowed values are `CoreML`, `TensorFlow`, `DockerFile`, `ONNX`, `VAIDK`, and `OpenVino`.
+* Provide the project ID, iteration ID of the model you want to export.
+* The *platform* parameter specifies the platform to export to: allowed values are `CoreML`, `TensorFlow`, `DockerFile`, `ONNX`, `VAIDK`, and `OpenVino`.
* The *flavor* parameter specifies the format of the exported model: allowed values are `Linux`, `Windows`, `ONNX10`, `ONNX12`, `ARM`, `TensorFlowNormal`, and `TensorFlowLite`.
* The *raw* parameter gives you the option to retrieve the raw JSON response along with the object model response.
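
A sketch of the call with these parameters; `project_id` and `iteration_id` assume values obtained earlier with the trainer client, and polling via `get_exports` is one way to wait for the asynchronous export to finish:

```python
import time

# Request an ONNX export of the trained iteration (assumed IDs).
platform = "ONNX"
flavor = "ONNX12"
export = trainer.export_iteration(project_id, iteration_id, platform, flavor, raw=False)

# The export runs asynchronously; poll until it leaves the Exporting state.
while export.status == "Exporting":
    time.sleep(10)
    for e in trainer.get_exports(project_id, iteration_id):
        if e.platform == platform and e.flavor == flavor:
            export = e
            break

print(export.status, export.download_uri)
```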

@@ -93,5 +93,4 @@ Integrate your exported model into an application by exploring one of the follow
* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
* See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift.
* See the sample for [TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
-* See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
* See the sample for how to use the exported model [(VAIDK/OpenVino)](https://github.com/Azure-Samples/customvision-export-samples)

articles/ai-services/custom-vision-service/export-your-model.md

Lines changed: 1 addition & 2 deletions
@@ -47,7 +47,7 @@ To convert the domain of an existing model, follow these steps:

    :::image type="content" source="media/export-your-model/gear-icon.png" alt-text="Screenshot that shows the gear icon.":::

-1. In the **Domains** section, select one of the **compact** domains. Select **Save Changes** to save the changes.
+1. In the **Domains** section, select one of the **compact** domains. Select **Save Changes** to save the changes.

    > [!NOTE]
    > For Vision AI Dev Kit, the project must be created with the **General (Compact)** domain, and you must specify the **Vision AI Dev Kit** option under the **Export Capabilities** section.
@@ -77,4 +77,3 @@ To integrate your exported model into an application, explore one of the followi
* [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
* See the Swift sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
* See the Android sample for [TensorFlow model in an Android app](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
-* See the Xamarin iOS sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
