---
title: "Select a domain for a Custom Vision project - Azure AI Vision"
titleSuffix: Azure AI services
description: Learn how to select a model domain for your project in the Custom Vision Service.
#services: cognitive-services
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-custom-vision
ms.topic: how-to
ms.date: 11/14/2024
ms.author: pafarley
---

# Select a domain for a Custom Vision project

This guide shows you how to select a domain for your project in the Custom Vision Service. Domains are used as the starting point for your project.

Sign in to your account on the [Custom Vision website](https://www.customvision.ai), then select your project. Select the **Settings** icon at the top right. On the **Project Settings** page, you can choose a model domain. You should choose the domain that's closest to your use case scenario. If you're accessing Custom Vision through a client library or REST API, you need to specify a domain ID when creating the project. You can get a list of domain IDs by using a [Get Domains](/rest/api/customvision/get-domains) request. Or, use the following table.
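
As a minimal sketch of passing a domain ID at project creation time, the following builds a Create Project request against the Training REST API. The endpoint path, query parameters, and header name here are assumptions modeled on the v3.3 Training API; check the REST reference for your API version, and note that the resource endpoint and key are placeholders.

```python
# Minimal sketch: build a Create Project request that pins a specific domain ID.
# The path /customvision/v3.3/training/projects, the domainId query parameter,
# and the Training-Key header are assumptions based on the v3.3 Training API.
from urllib.parse import urlencode

def build_create_project_request(endpoint: str, training_key: str,
                                 name: str, domain_id: str):
    """Return the (url, headers) pair for a POST that creates a project."""
    query = urlencode({"name": name, "domainId": domain_id})
    url = f"{endpoint}/customvision/v3.3/training/projects?{query}"
    headers = {"Training-Key": training_key}
    return url, headers

# Example: the General [A2] image classification domain from the table below.
url, headers = build_create_project_request(
    "https://westus2.api.cognitive.microsoft.com",  # hypothetical resource endpoint
    "<your-training-key>",                          # placeholder key
    "fruit-classifier",
    "2e37d7fb-3a54-486a-b4d6-cfc369af0018",
)
print(url)
```

You would then send this as a POST request (for example, with `requests.post(url, headers=headers)`) and read the new project's ID from the response.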

## Image classification domains

|Domain|ID|Purpose|
|---|---|---|
|__General__|`ee85a74c-405e-4adc-bb47-ffa8ca0c9f31`| Optimized for a broad range of image classification tasks. If none of the other specific domains are appropriate, or if you're unsure of which domain to choose, select one of the **General** domains. |
|__General [A1]__|`a8e3c40f-fb4a-466f-832a-5e457ae4a344`| Optimized for better accuracy with inference time comparable to the **General** domain. Recommended for larger datasets or more difficult user scenarios. This domain requires more training time. |
|__General [A2]__|`2e37d7fb-3a54-486a-b4d6-cfc369af0018`| Optimized for better accuracy with faster inference time than the **General [A1]** and **General** domains. Recommended for most datasets. This domain requires less training time than the **General** and **General [A1]** domains. |
|__Food__|`c151d5b5-dd07-472a-acc8-15d29dea8518`| Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the **Food** domain. |
|__Landmarks__|`ca455789-012d-4b50-9fec-5bb63841c793`| Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. It works even if the landmark is slightly obstructed by people in front of it. |
|__Retail__|`b30a91ae-e3c1-4f73-a81e-c270bff27c39`| Optimized for images that are found in a shopping catalog or shopping website. If you want high-precision classification between dresses, pants, and shirts, use this domain. |
|__Compact domains__| | Optimized for the constraints of real-time classification on edge devices. |

> [!NOTE]
> The **General [A1]** and **General [A2]** domains can be used for a broad set of scenarios and are optimized for accuracy. Use the **General [A2]** model for better inference speed and shorter training time. For larger datasets, you might want to use **General [A1]**, which can give better accuracy than **General [A2]** but requires more training and inference time. The **General** model requires more inference time than both **General [A1]** and **General [A2]**.
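
As a rough illustration only, the guidance above can be encoded as a small selection helper. The dataset-size threshold below is a hypothetical stand-in; the service gives qualitative guidance ("larger datasets"), not a specific cutoff.

```python
# Illustrative only: encode the note above as a domain-selection helper.
# The 50,000-image threshold is a hypothetical stand-in for "larger datasets".
GENERAL_A1 = "a8e3c40f-fb4a-466f-832a-5e457ae4a344"
GENERAL_A2 = "2e37d7fb-3a54-486a-b4d6-cfc369af0018"

def pick_general_domain(num_images: int, large_dataset_threshold: int = 50_000) -> str:
    """Prefer General [A2] for most datasets; consider General [A1] for large ones."""
    if num_images >= large_dataset_threshold:
        return GENERAL_A1  # better accuracy, but more training and inference time
    return GENERAL_A2      # faster inference and shorter training time

print(pick_general_domain(1_000))   # General [A2]
print(pick_general_domain(80_000))  # General [A1]
```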

## Object detection domains

|Domain|ID|Purpose|
|---|---|---|
|__General__|`da2e3a8a-40a5-4171-82f4-58522f70fbc1`| Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or you're unsure of which domain to choose, select the **General** domain. |
|__General [A1]__|`9c616dff-2e7d-ea11-af59-1866da359ce6`| Optimized for better accuracy with inference time comparable to the **General** domain. Recommended for more accurate region location needs, larger datasets, or more difficult user scenarios. This domain requires more training time, and results aren't deterministic: expect a ±1% mean Average Precision (mAP) difference with the same training data provided. |
|__Logo__|`1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4`| Optimized for finding brand logos in images. |
|__Products on shelves__|`3780a898-81c3-4516-81ae-3a139614e1f3`| Optimized for detecting and classifying products on shelves. |
|__Compact domains__| | Optimized for the constraints of real-time object detection on edge devices. |

## Compact domains

The models generated by compact domains can be exported to run locally. In the Custom Vision 3.4 public preview API, you can get a list of the exportable platforms for compact domains by calling the GetDomains API.
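
As a sketch of working with that response, the following filters a GetDomains-style payload down to the compact (exportable) domains. The response shape here is an illustrative assumption modeled on the Training API's Domain object (`id`, `name`, `type`, `exportable`); verify the exact fields against the REST reference for your API version.

```python
# Sketch: list compact (exportable) domains from a GetDomains-style response.
# The sample payload is illustrative; field names are assumptions modeled on
# the Training API's Domain object, not a captured service response.
sample_response = [
    {"id": "0732100f-1a38-4e49-a514-c9b44c697ab5",
     "name": "General (compact)", "type": "Classification", "exportable": True},
    {"id": "ee85a74c-405e-4adc-bb47-ffa8ca0c9f31",
     "name": "General", "type": "Classification", "exportable": False},
]

def exportable_domains(domains):
    """Keep only the domains whose models can be exported to run locally."""
    return [d["name"] for d in domains if d["exportable"]]

print(exportable_domains(sample_response))  # ['General (compact)']
```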

All of the following domains support export in ONNX, TensorFlow, TensorFlowLite, TensorFlow.js, CoreML, and VAIDK formats, with the exception that the **Object Detection General (compact)** domain doesn't support VAIDK.

Model performance varies by selected domain. In the following table, we report the model size and inference time on an Intel desktop CPU and an NVIDIA GPU \[1\]. These numbers don't include preprocessing and postprocessing time.

|Task|Domain|ID|Model size|CPU inference time|GPU inference time|
|---|---|---|---|---|---|
|Classification|**General (compact)**|`0732100f-1a38-4e49-a514-c9b44c697ab5`|6 MB|10 ms|5 ms|
|Classification|**General (compact) [S1]**|`a1db07ca-a19a-4830-bae8-e004a42dc863`|43 MB|50 ms|5 ms|
|Object detection|**General (compact)**|`a27d5ca5-bb19-49d8-a70a-fec086c47f5b`|45 MB|35 ms|5 ms|
|Object detection|**General (compact) [S1]**|`7ec2ac80-887b-48a6-8df9-8b1357765430`|14 MB|27 ms|7 ms|

> [!NOTE]
> The __General (compact)__ domain for object detection requires special postprocessing logic. For details, see the example script in the exported zip package. If you need a model without the postprocessing logic, use __General (compact) [S1]__.

> [!IMPORTANT]
> There's no guarantee that the exported models give exactly the same results as the Prediction API in the cloud. A slight difference in the running platform or the preprocessing implementation can cause a larger difference in the model outputs. For details about the preprocessing logic, see [Quickstart: Create an image classification project](quickstarts/image-classification.md).

\[1\] Intel Xeon E5-2690 CPU and NVIDIA Tesla M60

## Related content

Follow a quickstart to get started creating and training a Custom Vision project.

* [Build an image classification model](getting-started-build-a-classifier.md)
* [Build an object detector](get-started-build-detector.md)