
Commit 7dbd6d3

Merge pull request #3815 from MicrosoftDocs/main
Auto Publish – main to live - 2025-03-29 05:12 (UTC)
2 parents 50a8092 + ffec2d1 commit 7dbd6d3

File tree

10 files changed: +54 -232 lines


.openpublishing.redirection.json

Lines changed: 5 additions & 0 deletions
```diff
@@ -264,6 +264,11 @@
       "source_path_from_root": "/articles/ai-services/openai/concepts/provisioned-reservation-update.md",
       "redirect_url": "/azure/ai-services/openai/concepts/provisioned-migration",
       "redirect_document_id": true
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/custom-vision-service/logo-detector-mobile.md",
+      "redirect_url": "/azure/ai-services/custom-vision-service",
+      "redirect_document_id": false
     }
   ]
 }
```
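The new entry retires *logo-detector-mobile.md* and redirects it to the Custom Vision service root; `"redirect_document_id": false` means the old article's document ID isn't carried over to the target. For context, a minimal sketch of how a redirect map like this can be resolved, assuming the file's standard top-level `redirections` array (the helper and the usage below are illustrative, not part of the commit):

```python
import json

def resolve_redirect(redirection_file, source_path):
    """Return the redirect target for a retired article path, or None."""
    with open(redirection_file, encoding="utf-8") as f:
        redirects = json.load(f)["redirections"]
    for entry in redirects:
        if entry.get("source_path_from_root") == source_path:
            return entry["redirect_url"]
    return None

# Hypothetical usage: prints "/azure/ai-services/custom-vision-service"
print(resolve_redirect(
    ".openpublishing.redirection.json",
    "/articles/ai-services/custom-vision-service/logo-detector-mobile.md",
))
```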

articles/ai-services/custom-vision-service/custom-vision-onnx-windows-ml.md

Lines changed: 1 addition & 2 deletions
```diff
@@ -40,7 +40,7 @@ The example application is available at the [Azure AI services ONNX Custom Visio
 
 To use your own image classifier model, follow these steps:
 
-1. Create and train a classifier with the Custom Vision Service. For instructions on how to do this, see [Create and train a classifier](./getting-started-build-a-classifier.md). Use one of the **compact** domains such as **General (compact)**.
+1. Create and train a classifier with the Custom Vision Service. For instructions on how to do this, see [Create and train a classifier](./getting-started-build-a-classifier.md). Use one of the **compact** domains such as **General (compact)**.
    * If you have an existing classifier that uses a different domain, you can convert it to **compact** in the project settings. Then, re-train your project before continuing.
 1. Export your model. Switch to the Performance tab and select an iteration that was trained with a **compact** domain. Select the **Export** button that appears. Then select **ONNX**, and then **Export**. Once the file is ready, select the **Download** button. For more information on export options, see [Export your model](./export-your-model.md).
 1. Open the downloaded *.zip* file and extract the *model.onnx* file from it. This file contains your classifier model.
@@ -59,6 +59,5 @@ To discover other ways to export and use a Custom Vision model, see the followin
 * [Export your model](./export-your-model.md)
 * [Use exported TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
 * [Use exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
-* [Use exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
 
 For more information on using ONNX models with Windows ML, see [Integrate a model into your app with Windows ML](/windows/ai/windows-ml/integrate-model).
```
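The first hunk is a whitespace-only touch-up of the training step; the substantive change is dropping the retired Xamarin sample link. Before wiring the extracted *model.onnx* into a Windows ML app, the file can be sanity-checked from Python. A minimal sketch, assuming the `onnxruntime` package and a float32 NCHW input; symbolic batch dimensions are replaced with 1:

```python
import numpy as np
import onnxruntime as ort

# Load the classifier exported from Custom Vision.
session = ort.InferenceSession("model.onnx")
input_meta = session.get_inputs()[0]
print("input:", input_meta.name, input_meta.shape)

# Build a dummy batch; assumes a float32 input (check input_meta.type).
shape = [d if isinstance(d, int) else 1 for d in input_meta.shape]
dummy = np.zeros(shape, dtype=np.float32)

# Run inference and show the raw class scores.
outputs = session.run(None, {input_meta.name: dummy})
print("raw scores:", outputs[0])
```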

articles/ai-services/custom-vision-service/export-model-python.md

Lines changed: 14 additions & 15 deletions
````diff
@@ -32,7 +32,7 @@ This guide shows you how to use an [exported TensorFlow model](./export-your-mod
     pip install numpy
     pip install opencv-python
     ```
-
+
 ## Load your model and tags
 
 The downloaded _.zip_ file from the export step contains a _model.pb_ and a _labels.txt_ file. These files represent the trained model and the classification labels. The first step is to load the model into your project. Add the following code to a new Python script.
@@ -64,19 +64,19 @@ with open(labels_filename, 'rt') as lf:
 There are a few steps you need to take to prepare an image for prediction. These steps mimic the image manipulation performed during training.
 
 1. Open the file and create an image in the BGR color space
-
+
     ```Python
     from PIL import Image
     import numpy as np
     import cv2
-
+
     # Load from a file
     imageFile = "<path to your image file>"
     image = Image.open(imageFile)
-
+
     # Update orientation based on EXIF tags, if the file has orientation info.
     image = update_orientation(image)
-
+
     # Convert to OpenCV format
     image = convert_to_opencv(image)
     ```
@@ -108,40 +108,40 @@ There are a few steps you need to take to prepare an image for prediction. These
     with tf.compat.v1.Session() as sess:
         input_tensor_shape = sess.graph.get_tensor_by_name('Placeholder:0').shape.as_list()
     network_input_size = input_tensor_shape[1]
-
+
     # Crop the center for the specified network_input_Size
     augmented_image = crop_center(augmented_image, network_input_size, network_input_size)
-
+
     ```
 
 1. Define helper functions. The steps above use the following helper functions:
-
+
     ```Python
     def convert_to_opencv(image):
         # RGB -> BGR conversion is performed as well.
         image = image.convert('RGB')
         r,g,b = np.array(image).T
         opencv_image = np.array([b,g,r]).transpose()
         return opencv_image
-
+
     def crop_center(img,cropx,cropy):
         h, w = img.shape[:2]
         startx = w//2-(cropx//2)
         starty = h//2-(cropy//2)
         return img[starty:starty+cropy, startx:startx+cropx]
-
+
     def resize_down_to_1600_max_dim(image):
         h, w = image.shape[:2]
         if (h < 1600 and w < 1600):
             return image
-
+
         new_size = (1600 * w // h, 1600) if (h > w) else (1600, 1600 * h // w)
         return cv2.resize(image, new_size, interpolation = cv2.INTER_LINEAR)
-
+
     def resize_to_256_square(image):
         h, w = image.shape[:2]
         return cv2.resize(image, (256, 256), interpolation = cv2.INTER_LINEAR)
-
+
     def update_orientation(image):
         exif_orientation_tag = 0x0112
         if hasattr(image, '_getexif'):
@@ -158,7 +158,7 @@ There are a few steps you need to take to prepare an image for prediction. These
         image = image.transpose(Image.FLIP_LEFT_RIGHT)
     return image
     ```
-
+
 ## Classify an image
 
 Once the image is prepared as a tensor, we can send it through the model for a prediction.
@@ -201,4 +201,3 @@
 Next, learn how to wrap your model into a mobile application:
 * [Use your exported TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
 * [Use your exported CoreML model in a Swift iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
-* [Use your exported CoreML model in an iOS application with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
````
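Nearly all of the churn above is trailing-whitespace cleanup inside the article's image-preparation snippets; the only substantive change is removing the retired Xamarin sample link. For reference, the prepared tensor is ultimately run through the graph roughly as follows. This sketch assumes the graph was already imported as in the article's earlier steps and reuses its `augmented_image` and `labels` variables; the tensor names `Placeholder:0` and `loss:0` are assumptions about the exported graph:

```python
import numpy as np
import tensorflow as tf

# Assumed tensor names in the exported Custom Vision graph.
input_node = 'Placeholder:0'
output_layer = 'loss:0'

with tf.compat.v1.Session() as sess:
    prob_tensor = sess.graph.get_tensor_by_name(output_layer)
    # augmented_image is the BGR array built by the preparation steps above.
    predictions = sess.run(prob_tensor, {input_node: [augmented_image]})

# Map the highest-scoring output back to its label from labels.txt.
highest = np.argmax(predictions)
print(labels[highest], predictions[0][highest])
```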

articles/ai-services/custom-vision-service/export-programmatically.md

Lines changed: 3 additions & 4 deletions
````diff
@@ -32,13 +32,13 @@ trainer = CustomVisionTrainingClient(ENDPOINT, credentials)
 ```
 
 > [!IMPORTANT]
-> Remember to remove the keys from your code when youre done, and never post them publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Azure AI services [security](../security-features.md) article.
+> Remember to remove the keys from your code when you're done, and never post them publicly. For production, consider using a secure way of storing and accessing your credentials. For more information, see the Azure AI services [security](../security-features.md) article.
 
 ## Call the export method
 
 Call the **export_iteration** method.
-* Provide the project ID, iteration ID of the model you want to export.
-* The *platform* parameter specifies the platform to export to: allowed values are `CoreML`, `TensorFlow`, `DockerFile`, `ONNX`, `VAIDK`, and `OpenVino`.
+* Provide the project ID, iteration ID of the model you want to export.
+* The *platform* parameter specifies the platform to export to: allowed values are `CoreML`, `TensorFlow`, `DockerFile`, `ONNX`, `VAIDK`, and `OpenVino`.
 * The *flavor* parameter specifies the format of the exported model: allowed values are `Linux`, `Windows`, `ONNX10`, `ONNX12`, `ARM`, `TensorFlowNormal`, and `TensorFlowLite`.
 * The *raw* parameter gives you the option to retrieve the raw JSON response along with the object model response.
 
@@ -92,5 +92,4 @@ Integrate your exported model into an application by exploring one of the follow
 * [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
 * See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift.
 * See the sample for [TensorFlow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android.
-* See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
 * See the sample for how to use the exported model [(VAIDK/OpenVino)](https://github.com/Azure-Samples/customvision-export-samples)
````
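Putting the bullet points above into practice, a minimal sketch of the export call and polling loop using the `azure-cognitiveservices-vision-customvision` Python SDK; the endpoint, key, and IDs are placeholders, and the 10-second polling interval is an arbitrary choice:

```python
import time
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Training-key": "<your training key>"})
trainer = CustomVisionTrainingClient("<your endpoint>", credentials)

project_id = "<your project id>"      # placeholder
iteration_id = "<your iteration id>"  # placeholder

# Request an ONNX export of the chosen iteration.
export = trainer.export_iteration(project_id, iteration_id, "ONNX", raw=False)

# Poll until the export finishes, then print the download URI.
while export.status == "Exporting":
    time.sleep(10)
    exports = trainer.get_exports(project_id, iteration_id)
    export = next(e for e in exports if e.platform == "ONNX")

print(export.status, export.download_uri)
```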

articles/ai-services/custom-vision-service/export-your-model.md

Lines changed: 1 addition & 2 deletions
```diff
@@ -46,7 +46,7 @@ To convert the domain of an existing model, follow these steps:
 
    :::image type="content" source="media/export-your-model/gear-icon.png" alt-text="Screenshot that shows the gear icon.":::
 
-1. In the **Domains** section, select one of the **compact** domains. Select **Save Changes** to save the changes.
+1. In the **Domains** section, select one of the **compact** domains. Select **Save Changes** to save the changes.
 
    > [!NOTE]
    > For Vision AI Dev Kit, the project must be created with the **General (Compact)** domain, and you must specify the **Vision AI Dev Kit** option under the **Export Capabilities** section.
@@ -76,4 +76,3 @@ To integrate your exported model into an application, explore one of the followi
 * [Use your ONNX model with Windows Machine Learning](custom-vision-onnx-windows-ml.md)
 * See the Swift sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726)
 * See the Android sample for [TensorFlow model in an Android app](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample)
-* See the Xamarin iOS sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel)
```

articles/ai-services/custom-vision-service/index.yml

Lines changed: 1 addition & 11 deletions
```diff
@@ -17,7 +17,7 @@ metadata:
 landingContent:
 # Cards and links should be based on top customer tasks or top subjects
 # Start card title with a verb
-  # Card
+  # Card
   - title: About Custom Vision
     linkLists:
       - linkListType: overview
@@ -61,10 +61,6 @@ landingContent:
         links:
           - text: Use the prediction API
            url: use-prediction-api.md
-      - linkListType: tutorial
-        links:
-          - text: Logo detector for mobile
-            url: logo-detector-mobile.md
 
   - title: Test and improve models
     linkLists:
@@ -89,12 +85,6 @@ landingContent:
             url: custom-vision-onnx-windows-ml.md
           - text: Run TensorFlow model in Python
            url: export-model-python.md
-      - linkListType: tutorial
-        links:
-          #- text: IoT Visual Alerts app
-          #  url: iot-visual-alerts-tutorial.md
-          - text: Logo detector for mobile
-            url: logo-detector-mobile.md
 
   - title: Reference
     linkLists:
```
