articles/ai-services/computer-vision/how-to/find-similar-faces.md (1 addition, 1 deletion)

@@ -101,4 +101,4 @@ Run the command, and the returned JSON should show the correct face ID as a simi
 In this guide, you learned how to call the Find Similar API to do a face search by similarity in a larger group of faces. Next, learn more about the different recognition models available for face comparison operations.
 
 > [!div class="nextstepaction"]
-> [Specify a face recognition model](specify-recognition-model.md)
+> [Specify a face recognition model](./specify-recognition-model.md)

articles/ai-services/custom-vision-service/copy-move-projects.md (8 additions, 4 deletions)

@@ -6,8 +6,10 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-custom-vision
 ms.topic: how-to
-ms.date: 01/22/2024
+ms.date: 01/22/2025
 ms.author: pafarley
+#customer intent: As a developer, I want to copy and back up Custom Vision projects so that I can ensure project availability and disaster recovery.
+
 ---
 
 # Copy and back up your Custom Vision projects

@@ -29,7 +31,7 @@ The **[ExportProject](/rest/api/customvision/projects/export)** and **[ImportPro
 - A created Custom Vision project. See [Build a classifier](./getting-started-build-a-classifier.md) for instructions on how to do this.
 * [PowerShell version 6.0+](/powershell/scripting/install/installing-powershell-core-on-windows), or a similar command-line utility.
 
-## Process overview
+## Understand the process
 
 The process for copying a project consists of the following steps:
 

@@ -137,7 +139,9 @@ You'll get a `200/OK` response with metadata about your newly imported project.
 }
 ```
 
-## Next steps
+## Next step
 
 In this guide, you learned how to copy and move a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision.
-* [REST API reference documentation](/rest/api/custom-vision/)
+
+> [!div class="nextstepaction"]
+> [REST API reference documentation](/rest/api/custom-vision/)

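The export/import flow that this file documents boils down to two REST calls against the training endpoint. The following is a minimal sketch, not taken from the article itself: the v3.3 path, the `Training-Key` header, and the `token` field in the export response are assumptions to verify against the [REST API reference documentation](/rest/api/custom-vision/), and the resource names, keys, and project ID are placeholders.

```python
import requests

# Hypothetical values; substitute your own endpoints, training keys, and project ID.
SOURCE_ENDPOINT = "https://<source-resource>.cognitiveservices.azure.com"
TARGET_ENDPOINT = "https://<target-resource>.cognitiveservices.azure.com"
SOURCE_KEY = "<source-training-key>"
TARGET_KEY = "<target-training-key>"
PROJECT_ID = "<source-project-id>"

# Step 1: export the source project; the response is expected to carry a reference token.
export = requests.get(
    f"{SOURCE_ENDPOINT}/customvision/v3.3/Training/projects/{PROJECT_ID}/export",
    headers={"Training-Key": SOURCE_KEY},
)
export.raise_for_status()
token = export.json()["token"]  # assumed field name; check the ExportProject reference

# Step 2: import into the target resource using that token. A 200/OK response
# returns metadata about the newly imported project, as described in the article.
imported = requests.post(
    f"{TARGET_ENDPOINT}/customvision/v3.3/Training/projects/import",
    params={"token": token},
    headers={"Training-Key": TARGET_KEY},
)
imported.raise_for_status()
print(imported.json())
```

Because the import call only consumes the token string, the same two-step pattern should work across regions or subscriptions, which is what makes it usable for backup as well as copying.
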
articles/ai-services/custom-vision-service/getting-started-improving-your-classifier.md (9 additions, 8 deletions)

@@ -1,14 +1,15 @@
 ---
-title: Improving your model - Custom Vision service
+title: Improve your model - Custom Vision service
 titleSuffix: Azure AI services
 description: In this article you'll learn how the amount, quality and variety of data can improve the quality of your model in the Custom Vision service.
+#customer intent: As a developer, I want to improve my Custom Vision model so that it performs better with real-world data.
 #services: cognitive-services
 author: PatrickFarley
 manager: nitinme
 
 ms.service: azure-ai-custom-vision
 ms.topic: how-to
-ms.date: 02/14/2024
+ms.date: 01/22/2025
 ms.author: pafarley
 ---
 

@@ -30,15 +31,15 @@ Sometimes a model will learn to make predictions based on arbitrary characterist
 
 To correct this problem, provide images with different angles, backgrounds, object size, groups, and other variations. The following sections expand upon these concepts.
 
-## Data quantity
+## Ensure data quantity
 
 The number of training images is the most important factor for your dataset. We recommend using at least 50 images per label as a starting point. With fewer images, there's a higher risk of overfitting, and while your performance numbers may suggest good quality, your model may struggle with real-world data.
 
-## Data balance
+## Ensure data balance
 
 It's also important to consider the relative quantities of your training data. For instance, using 500 images for one label and 50 images for another label makes for an imbalanced training dataset. This will cause the model to be more accurate in predicting one label than another. You're likely to see better results if you maintain at least a 1:2 ratio between the label with the fewest images and the label with the most images. For example, if the label with the most images has 500 images, the label with the least images should have at least 250 images for training.
 
-## Data variety
+## Ensure data variety
 
 Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your model could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
 

@@ -66,7 +67,7 @@ To correct this problem, include a variety of images to ensure that your model c
 
 
 
-## Negative images (classifiers only)
+## Use negative images (classifiers only)
 
 If you're using an image classifier, you might need to add _negative samples_ to help make your classifier more accurate. Negative samples are images that don't match any of the other tags. When you upload these images, apply the special **Negative** label to them.
 

@@ -77,7 +78,7 @@ Object detectors handle negative samples automatically, because any image areas
 >
 > On the other hand, in cases where the negative images are just a variation of the images used in training, it is likely that the model will classify the negative images as a labeled class due to the great similarities. For example, if you have an orange vs. grapefruit classifier, and you feed in an image of a clementine, it may score the clementine as an orange because many features of the clementine resemble those of oranges. If your negative images are of this nature, we recommend you create one or more additional tags (such as **Other**) and label the negative images with this tag during training to allow the model to better differentiate between these classes.
 
-## Occlusion and truncation (object detectors only)
+## Handle occlusion and truncation (object detectors only)
 
 If you want your object detector to detect truncated objects (objects that are partially cut out of the image) or occluded objects (objects that are partially blocked by other objects in the image), you'll need to include training images that cover those cases.
 

@@ -108,7 +109,7 @@ To inspect image predictions, go to the __Training Images__ tab, select your pre
 
 Sometimes a visual inspection can identify patterns that you can then correct by adding more training data or modifying existing training data. For example, a classifier for apples vs. limes may incorrectly label all green apples as limes. You can then correct this problem by adding and providing training data that contains tagged images of green apples.
 
-## Next steps
+## Next step
 
 In this guide, you learned several techniques to make your custom image classification model or object detector model more accurate. Next, learn how to test images programmatically by submitting them to the Prediction API.

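The quantity and balance guidance in this article's hunks (at least 50 images per label, and no worse than a 1:2 ratio between the smallest and largest label) is easy to sanity-check before training. A rough sketch, not part of the article, that flags tags falling short of either guideline:

```python
from collections import Counter

def check_training_set(labels, min_per_label=50, max_ratio=2.0):
    """Flag tags that fall below the suggested 50-image minimum or the 1:2 balance ratio."""
    counts = Counter(labels)
    largest = max(counts.values())
    for tag, n in sorted(counts.items()):
        notes = []
        if n < min_per_label:
            notes.append(f"below the {min_per_label}-image minimum")
        if largest / n > max_ratio:
            notes.append("more than a 1:2 imbalance against the largest tag")
        print(f"{tag}: {n} images" + (f"  <- {', '.join(notes)}" if notes else ""))

# Example from the article's numbers: 500 'apple' images vs. 40 'citrus' images
# trips both checks for 'citrus'.
check_training_set(["apple"] * 500 + ["citrus"] * 40)
```
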
articles/ai-services/custom-vision-service/includes/quickstarts/go-tutorial-object-detection.md (0 additions, 8 deletions)

@@ -11,14 +11,6 @@ This guide provides instructions and sample code to help you get started using t
 > [!NOTE]
 > If you want to build and train an object detection model _without_ writing code, see the [browser-based guidance](../../get-started-build-detector.md) instead.

articles/ai-services/custom-vision-service/includes/quickstarts/java-tutorial-od.md (0 additions, 9 deletions)

@@ -12,15 +12,6 @@ Get started using the Custom Vision client library for Java to build an object d
 > [!NOTE]
 > If you want to build and train an object detection model _without_ writing code, see the [browser-based guidance](../../get-started-build-detector.md) instead.

articles/ai-services/custom-vision-service/includes/quickstarts/node-tutorial-object-detection.md (0 additions, 8 deletions)

@@ -12,14 +12,6 @@ This guide provides instructions and sample code to help you get started using t
 > [!NOTE]
 > If you want to build and train an object detection model _without_ writing code, see the [browser-based guidance](../../get-started-build-detector.md) instead.

articles/ai-services/custom-vision-service/includes/quickstarts/python-tutorial-od.md (0 additions, 8 deletions)

@@ -11,14 +11,6 @@ Get started with the Custom Vision client library for Python. Follow these steps
 > [!NOTE]
 > If you want to build and train an object detection model _without_ writing code, see the [browser-based guidance](../../get-started-build-detector.md) instead.
 
-Use the Custom Vision client library for Python to:

articles/ai-services/custom-vision-service/limits-and-quotas.md (3 additions, 3 deletions)

@@ -8,13 +8,13 @@ manager: nitinme
 
 ms.service: azure-ai-custom-vision
 ms.topic: conceptual
-ms.date: 01/21/2024
+ms.date: 01/22/2025
 ms.author: pafarley
 ---
 
 # Limits and quotas
 
-There are two tiers of keys for the Custom Vision service. You can sign up for a F0 (free) or S0 (standard) subscription through the Azure portal. This page outlines the limitations of each tier. See the [Azure AI services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for more details on pricing and transactions.
+There are two tiers of subscription to the Custom Vision service. You can sign up for an F0 (free) or S0 (standard) subscription through the Azure portal. This page outlines the limitations of each tier. See the [Custom Vision pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/custom-vision-service/) for more details on pricing and transactions.
 
 |Factor|**F0 (free)**|**S0 (standard)**|
 |-----|-----|-----|

@@ -42,4 +42,4 @@ There are two tiers of keys for the Custom Vision service. You can sign up for a
 > Images smaller than 256 pixels will be accepted but upscaled.
 
 > [!NOTE]
-> The image aspect ratio should not be larger than 25:1.
+> The image aspect ratio shouldn't be larger than 25:1.
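
The two notes in this final hunk translate into a quick local pre-upload check. A minimal sketch, assuming the Pillow package is available and reading "smaller than 256 pixels" as the shorter edge; the file name is hypothetical:

```python
from PIL import Image  # assumes the Pillow package is installed

def check_upload_limits(path):
    """Warn about the documented 256-pixel size note and 25:1 aspect-ratio limit before uploading."""
    with Image.open(path) as img:
        width, height = img.size
    if min(width, height) < 256:
        print(f"{path}: shorter side is {min(width, height)} px; accepted, but it will be upscaled.")
    if max(width, height) / min(width, height) > 25:
        print(f"{path}: aspect ratio exceeds 25:1; crop or pad before uploading.")

check_upload_limits("example.jpg")  # hypothetical file name
```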