Commit 7aedf51

Merge pull request #1564 from cdpark/refresh-oct-patfarley-5
Feature 322546: Q&M: Freshness - AI Services & Search 180d target - Oct sprint - patfarley-5
---
title: "Select a domain for a Custom Vision project - Azure AI Vision"
titleSuffix: Azure AI services
description: Learn how to select a model domain for your project in the Custom Vision Service.
#services: cognitive-services
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-custom-vision
ms.topic: how-to
ms.date: 11/14/2024
ms.author: pafarley
---

# Select a domain for a Custom Vision project

This guide shows you how to select a domain for your project in the Custom Vision Service. Domains are used as the starting point for your project.

Sign in to your account on the [Custom Vision website](https://www.customvision.ai), then select your project. Select the **Settings** icon at the top right. On the **Project Settings** page, you can choose a model domain. You should choose the domain that's closest to your use case scenario. If you're accessing Custom Vision through a client library or REST API, you need to specify a domain ID when creating the project. You can get a list of domain IDs by using a [Get Domains](/rest/api/customvision/get-domains) request. Or, use the following table.
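For example, the following minimal Python sketch lists the available domains and creates a project with a specific domain ID by using the Custom Vision training client library. The endpoint, training key, and project name are placeholders for your own resource.

```python
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

# Replace with your own resource values.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com/"
credentials = ApiKeyCredentials(in_headers={"Training-key": "<your-training-key>"})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# List the available domains and their IDs.
for domain in trainer.get_domains():
    print(domain.type, domain.name, domain.id)

# Create a project that uses a specific domain, for example the Food classification domain.
food_domain_id = "c151d5b5-dd07-472a-acc8-15d29dea8518"
project = trainer.create_project("My food classifier", domain_id=food_domain_id)
print(project.id)
```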

## Image classification domains

|Domain|ID|Purpose|
|---|---|---|
|__General__|`ee85a74c-405e-4adc-bb47-ffa8ca0c9f31`| Optimized for a broad range of image classification tasks. If none of the other specific domains are appropriate, or if you're unsure of which domain to choose, select one of the **General** domains. |
|__General [A1]__|`a8e3c40f-fb4a-466f-832a-5e457ae4a344`| Optimized for better accuracy, with inference time comparable to the **General** domain. Recommended for larger datasets or more difficult user scenarios. This domain requires more training time. |
|__General [A2]__|`2e37d7fb-3a54-486a-b4d6-cfc369af0018`| Optimized for better accuracy, with faster inference time than the **General [A1]** and **General** domains. Recommended for most datasets. This domain requires less training time than the **General** and **General [A1]** domains. |
|__Food__|`c151d5b5-dd07-472a-acc8-15d29dea8518`| Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the **Food** domain. |
|__Landmarks__|`ca455789-012d-4b50-9fec-5bb63841c793`| Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. It works even if the landmark is slightly obstructed by people in front of it. |
|__Retail__|`b30a91ae-e3c1-4f73-a81e-c270bff27c39`| Optimized for images that are found in a shopping catalog or on a shopping website. If you want high-precision classification of dresses, pants, and shirts, use this domain. |
|__Compact domains__| | Optimized for the constraints of real-time classification on edge devices. |

> [!NOTE]
> The **General [A1]** and **General [A2]** domains can be used for a broad set of scenarios and are optimized for accuracy. Use the **General [A2]** model for better inference speed and shorter training time. For larger datasets, you might want to use **General [A1]** to get better accuracy than **General [A2]**, though it requires more training and inference time. The **General** model requires more inference time than both **General [A1]** and **General [A2]**.
## Object detection domains

|Domain|ID|Purpose|
|---|---|---|
|__General__|`da2e3a8a-40a5-4171-82f4-58522f70fbc1`| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or you're unsure of which domain to choose, select the **General** domain. |
|__General [A1]__|`9c616dff-2e7d-ea11-af59-1866da359ce6`| Optimized for better accuracy, with inference time comparable to the **General** domain. Recommended for more accurate region location needs, larger datasets, or more difficult user scenarios. This domain requires more training time, and results aren't deterministic: expect a +/-1% mean Average Precision (mAP) difference with the same training data provided. |
|__Logo__|`1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4`| Optimized for finding brand logos in images. |
|__Products on shelves__|`3780a898-81c3-4516-81ae-3a139614e1f3`| Optimized for detecting and classifying products on shelves. |
|__Compact domains__| | Optimized for the constraints of real-time object detection on edge devices. |

## Compact domains

The models generated by compact domains can be exported to run locally. In the Custom Vision 3.4 public preview API, you can get a list of the exportable platforms for compact domains by calling the GetDomains API.
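For example, the following sketch calls the GetDomains REST endpoint directly and prints the exportable (compact) domains. The endpoint, training key, `v3.4-preview` version segment, and `exportablePlatforms` field name are assumptions here; confirm them against the [Get Domains](/rest/api/customvision/get-domains) reference for your resource.

```python
import requests

# Replace with your own resource values.
ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com"
headers = {"Training-Key": "<your-training-key>"}

# Call GetDomains; the version segment in the URL may differ for your API version.
response = requests.get(f"{ENDPOINT}/customvision/v3.4-preview/training/domains", headers=headers)
response.raise_for_status()

for domain in response.json():
    # Compact domains are marked as exportable; 'exportablePlatforms' (assumed field name)
    # lists the export targets that each compact domain supports.
    if domain.get("exportable"):
        print(domain["name"], domain["id"], domain.get("exportablePlatforms"))
```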

All of the following domains support export in ONNX, TensorFlow, TensorFlowLite, TensorFlow.js, CoreML, and VAIDK formats, with the exception that the **Object Detection General (compact)** domain doesn't support VAIDK.
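For example, the following minimal sketch requests an ONNX export of a trained iteration by using the Python training client created earlier, and then polls until the exported package is available to download. The project and iteration IDs are placeholders; the iteration must use a compact domain.

```python
import time

PROJECT_ID = "<your-project-id>"
ITERATION_ID = "<your-trained-iteration-id>"

# Request an ONNX export of the trained iteration.
trainer.export_iteration(PROJECT_ID, ITERATION_ID, platform="ONNX")

# Poll until the export finishes, then print its status and download URI.
while True:
    exports = trainer.get_exports(PROJECT_ID, ITERATION_ID)
    if exports and exports[0].status != "Exporting":
        print(exports[0].status, exports[0].download_uri)
        break
    time.sleep(10)
```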

Model performance varies by selected domain. In the following table, we report the model size and inference time on an Intel desktop CPU and an NVIDIA GPU \[1\]. These numbers don't include preprocessing and postprocessing time.

|Task|Domain|ID|Model size|CPU inference time|GPU inference time|
|---|---|---|---|---|---|
|Classification|**General (compact)**|`0732100f-1a38-4e49-a514-c9b44c697ab5`|6 MB|10 ms|5 ms|
|Classification|**General (compact) [S1]**|`a1db07ca-a19a-4830-bae8-e004a42dc863`|43 MB|50 ms|5 ms|
|Object detection|**General (compact)**|`a27d5ca5-bb19-49d8-a70a-fec086c47f5b`|45 MB|35 ms|5 ms|
|Object detection|**General (compact) [S1]**|`7ec2ac80-887b-48a6-8df9-8b1357765430`|14 MB|27 ms|7 ms|

> [!NOTE]
> The __General (compact)__ domain for object detection requires special postprocessing logic. For details, see the example script in the exported zip package. If you need a model without the postprocessing logic, use __General (compact) [S1]__.

> [!IMPORTANT]
> There's no guarantee that the exported models give exactly the same results as the Prediction API in the cloud. A slight difference in the runtime platform or the preprocessing implementation can cause a larger difference in the model outputs. For details about the preprocessing logic, see [Quickstart: Create an image classification project](quickstarts/image-classification.md).

\[1\] Intel Xeon E5-2690 CPU and NVIDIA Tesla M60

## Related content

Follow a quickstart to get started creating and training a Custom Vision project.

* [Build an image classification model](getting-started-build-a-classifier.md)
* [Build an object detector](get-started-build-detector.md)
