## articles/cognitive-services/Custom-Vision-Service/export-model-python.md (+2 −2)

```diff
@@ -9,15 +9,15 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 01/05/2022
+ms.date: 07/05/2022
 ms.author: pafarley
 ms.devlang: python
 ms.custom: devx-track-python
 ---

 # Tutorial: Run a TensorFlow model in Python

-After you have [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.
+After you've [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.

 > [!NOTE]
 > This tutorial applies only to models exported from "General (compact)" image classification projects. If you exported other models, please visit our [sample code repository](https://github.com/Azure-Samples/customvision-export-samples).
```
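The tutorial this diff touches ends by mapping the exported model's output vector to a tag from the `labels.txt` file that ships alongside `model.pb`. A minimal sketch of that final mapping step, for orientation only: this is not the tutorial's code, the tag names and score values are made up, and a real exported model's raw output tensor may need preprocessing or a softmax first.

```python
import numpy as np

def top_prediction(probabilities, labels):
    """Map a model's output vector to its best-scoring tag.

    `probabilities` holds one score per tag, in the same order as the
    labels.txt file included with a Custom Vision TensorFlow export.
    """
    probabilities = np.asarray(probabilities, dtype=float)
    best = int(np.argmax(probabilities))
    return labels[best], float(probabilities[best])

# Hypothetical tags and scores, for illustration only.
tag, score = top_prediction([0.05, 0.85, 0.10], ["apple", "banana", "pear"])
print(tag, score)  # banana 0.85
```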
## articles/cognitive-services/Custom-Vision-Service/export-your-model.md (+2 −2)

```diff
@@ -9,13 +9,13 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 10/27/2021
+ms.date: 07/05/2022
 ms.author: pafarley
 ---

 # Export your model for use with mobile devices

-Custom Vision Service allows classifiers to be exported to run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.
+Custom Vision Service lets you export your classifiers to be run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.
```
## articles/cognitive-services/Custom-Vision-Service/getting-started-improving-your-classifier.md (+16 −18)

```diff
@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 02/09/2021
+ms.date: 07/05/2022
 ms.author: pafarley
 ms.custom: cog-serv-seo-aug-2020
 ---
@@ -28,9 +28,7 @@ The following is a general pattern to help you train a more accurate model:

 ## Prevent overfitting

-Sometimes, a model will learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you are creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
-
-
+Sometimes a model will learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.

 To correct this problem, provide images with different angles, backgrounds, object size, groups, and other variations. The following sections expand upon these concepts.
@@ -44,35 +42,35 @@ It's also important to consider the relative quantities of your training data.

 ## Data variety

-Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your model could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you are creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
+Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your model could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.

 To correct this problem, include a variety of images to ensure that your model can generalize well. Below are some ways you can make your training set more diverse:

 * __Background:__ Provide images of your object in front of different backgrounds. Photos in natural contexts are better than photos in front of neutral backgrounds as they provide more information for the classifier.

-* __Lighting:__ Provide images with varied lighting (that is, taken with flash, high exposure, and so on), especially if the images used for prediction have different lighting. It is also helpful to use images with varying saturation, hue, and brightness.
+* __Lighting:__ Provide images with varied lighting (that is, taken with flash, high exposure, and so on), especially if the images used for prediction have different lighting. It's also helpful to use images with varying saturation, hue, and brightness.

 * __Object Size:__ Provide images in which the objects vary in size and number (for example, a photo of bunches of bananas and a closeup of a single banana). Different sizing helps the classifier generalize better.

-* __Camera Angle:__ Provide images taken with different camera angles. Alternatively, if all of your photos must be taken with fixed cameras (such as surveillance cameras), be sure to assign a different label to every regularly-occurring object to avoid overfitting—interpreting unrelated objects (such as lampposts) as the key feature.
+* __Camera Angle:__ Provide images taken with different camera angles. Alternatively, if all of your photos must be taken with fixed cameras (such as surveillance cameras), be sure to assign a different label to every regularly occurring object to avoid overfitting—interpreting unrelated objects (such as lampposts) as the key feature.

 * __Style:__ Provide images of different styles of the same class (for example, different varieties of the same fruit). However, if you have objects of drastically different styles (such as Mickey Mouse vs. a real-life mouse), we recommend you label them as separate classes to better represent their distinct features.

 ## Negative images (classifiers only)

-If you're using an image classifier, you may need to add _negative samples_ to help make your classifier more accurate. Negative samples are images which do not match any of the other tags. When you upload these images, apply the special **Negative** label to them.
+If you're using an image classifier, you may need to add _negative samples_ to help make your classifier more accurate. Negative samples are images that don't match any of the other tags. When you upload these images, apply the special **Negative** label to them.

 Object detectors handle negative samples automatically, because any image areas outside of the drawn bounding boxes are considered negative.
@@ -81,9 +79,9 @@ Object detectors handle negative samples automatically, because any image areas
 >
 > On the other hand, in cases where the negative images are just a variation of the images used in training, it is likely that the model will classify the negative images as a labeled class due to the great similarities. For example, if you have an orange vs. grapefruit classifier, and you feed in an image of a clementine, it may score the clementine as an orange because many features of the clementine resemble those of oranges. If your negative images are of this nature, we recommend you create one or more additional tags (such as **Other**) and label the negative images with this tag during training to allow the model to better differentiate between these classes.

-## Consider occlusion and truncation (object detectors only)
+## Occlusion and truncation (object detectors only)

-If you want your object detector to detect truncated objects (object is partially cut out of the image) or occluded objects (object is partially blocked by another object in the image), you'll need to include training images that cover those cases.
+If you want your object detector to detect truncated objects (objects that are partially cut out of the image) or occluded objects (objects that are partially blocked by other objects in the image), you'll need to include training images that cover those cases.

 > [!NOTE]
 > The issue of objects being occluded by other objects is not to be confused with **Overlap Threshold**, a parameter for rating model performance. The **Overlap Threshold** slider on the [Custom Vision website](https://customvision.ai) deals with how much a predicted bounding box must overlap with the true bounding box to be considered correct.
```
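The overlap that the note above describes is conventionally measured as intersection over union (IoU). A minimal sketch of that measure, assuming boxes are given as normalized `(left, top, width, height)` tuples (the region shape Custom Vision uses; verify against your own data before relying on it):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes.

    Boxes are (left, top, width, height) tuples in normalized
    coordinates; an assumption matching Custom Vision's region format.
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extents along each axis; zero when the boxes are disjoint.
    inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A prediction shifted slightly from the true box still overlaps substantially:
print(round(iou((0.0, 0.0, 0.5, 0.5), (0.1, 0.1, 0.5, 0.5)), 3))  # 0.471
```

A predicted box counts as correct only when this value meets the threshold chosen on the **Overlap Threshold** slider.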
```diff
@@ -96,9 +94,9 @@ When you use or test the model by submitting images to the prediction endpoint,

-2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones which can bring the most improvements to the model are listed at the top. To use a different sorting method, make a selection in the __Sort__ section.
+2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed at the top. To use a different sorting method, make a selection in the __Sort__ section.

-   To add an image to your existing training data, select the image, set the correct tag(s), and click __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab.
+   To add an image to your existing training data, select the image, set the correct tag(s), and select __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab.
```
## articles/cognitive-services/Custom-Vision-Service/test-your-model.md (+4 −4)

```diff
@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 10/27/2021
+ms.date: 07/05/2022
 ms.author: pafarley
 ---
@@ -23,7 +23,7 @@ After you train your Custom Vision model, you can quickly test it using a locall

-2. In the **Quick Test** window, select in the **Submit Image** field and enter the URL of the image you want to use for your test. If you want to use a locally stored image instead, select the **Browse local files** button and select a local image file.
+1. In the **Quick Test** window, select in the **Submit Image** field and enter the URL of the image you want to use for your test. If you want to use a locally stored image instead, select the **Browse local files** button and select a local image file.
@@ -40,7 +40,7 @@ You can now take the image submitted previously for testing and use it to retrai
 > [!TIP]
 > The default view shows images from the current iteration. You can use the __Iteration__ drop down field to view images submitted during previous iterations.

-2. Hover over an image to see the tags that were predicted by the classifier.
+1. Hover over an image to see the tags that were predicted by the classifier.

 > [!TIP]
 > Images are ranked, so that the images that can bring the most gains to the classifier are at the top. To select a different sorting, use the __Sort__ section.
@@ -49,7 +49,7 @@ You can now take the image submitted previously for testing and use it to retrai

-3. Use the __Train__ button to retrain the classifier.
+1. Use the __Train__ button to retrain the classifier.
```
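Beyond the **Quick Test** window, a published iteration can also be queried programmatically through the prediction endpoint mentioned above. A rough sketch of assembling such a request follows; the v3.0 route and the `Prediction-Key` header name are assumptions to verify against the current Custom Vision REST reference, all values shown are hypothetical, and no request is actually sent here.

```python
def classify_url_request(endpoint, project_id, published_name, image_url, prediction_key):
    """Build (url, headers, body) for classifying an image by URL.

    Assumed v3.0 prediction route; check the Custom Vision REST
    reference for your resource before using this in earnest.
    """
    url = (f"{endpoint.rstrip('/')}/customvision/v3.0/Prediction/"
           f"{project_id}/classify/iterations/{published_name}/url")
    headers = {"Prediction-Key": prediction_key,
               "Content-Type": "application/json"}
    body = {"Url": image_url}
    return url, headers, body

# Hypothetical values; send with e.g. requests.post(url, headers=headers, json=body).
url, headers, body = classify_url_request(
    "https://westus2.api.cognitive.microsoft.com", "my-project-id",
    "Iteration1", "https://example.com/test.jpg", "my-key")
print(url)
```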