Commit 7287f39

Commit message: cusvis freshness
Parent: 2d0fa30

5 files changed: +26, -28 lines


articles/cognitive-services/Custom-Vision-Service/export-model-python.md

Lines changed: 2 additions & 2 deletions

@@ -9,15 +9,15 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 01/05/2022
+ms.date: 07/05/2022
 ms.author: pafarley
 ms.devlang: python
 ms.custom: devx-track-python
 ---

 # Tutorial: Run a TensorFlow model in Python

-After you have [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.
+After you've [exported your TensorFlow model](./export-your-model.md) from the Custom Vision Service, this quickstart will show you how to use this model locally to classify images.

 > [!NOTE]
 > This tutorial applies only to models exported from "General (compact)" image classification projects. If you exported other models, please visit our [sample code repository](https://github.com/Azure-Samples/customvision-export-samples).
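The quickstart this file belongs to is about running an exported model locally to classify images, which starts with getting an image into the tensor layout the model expects. As a rough illustration only (this code is not part of the commit; the 224x224 input size and the RGB-to-BGR channel swap are assumptions about a typical "General (compact)" export, not facts taken from this diff), the preprocessing step might be sketched like this:

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image, size: int = 224) -> np.ndarray:
    """Resize an image and convert it to a batch-of-one BGR float
    tensor, the layout many exported compact models expect."""
    image = image.convert("RGB").resize((size, size))
    array = np.array(image, dtype=np.float32)  # shape: (H, W, 3), RGB order
    array = array[:, :, ::-1]                  # reverse channels: RGB -> BGR
    return array[np.newaxis, ...]              # add batch axis: (1, H, W, 3)

# A synthetic image stands in here; a real test would use Image.open("test.jpg").
tensor = preprocess(Image.new("RGB", (640, 480), color="red"))
print(tensor.shape)  # (1, 224, 224, 3)
```

The resulting array could then be fed to the loaded TensorFlow model; the actual input name, size, and channel order must be taken from the export itself.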

articles/cognitive-services/Custom-Vision-Service/export-your-model.md

Lines changed: 2 additions & 2 deletions

@@ -9,13 +9,13 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 10/27/2021
+ms.date: 07/05/2022
 ms.author: pafarley
 ---

 # Export your model for use with mobile devices

-Custom Vision Service allows classifiers to be exported to run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.
+Custom Vision Service lets you export your classifiers to be run offline. You can embed your exported classifier into an application and run it locally on a device for real-time classification.

 ## Export options

articles/cognitive-services/Custom-Vision-Service/getting-started-improving-your-classifier.md

Lines changed: 16 additions & 18 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 02/09/2021
+ms.date: 07/05/2022
 ms.author: pafarley
 ms.custom: cog-serv-seo-aug-2020
 ---
@@ -28,9 +28,7 @@ The following is a general pattern to help you train a more accurate model:

 ## Prevent overfitting

-Sometimes, a model will learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you are creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
-
-![Image of unexpected classification](./media/getting-started-improving-your-classifier/unexpected.png)
+Sometimes a model will learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.

 To correct this problem, provide images with different angles, backgrounds, object size, groups, and other variations. The following sections expand upon these concepts.

@@ -44,35 +42,35 @@ It's also important to consider the relative quantities of your training data. F

 ## Data variety

-Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your model could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you are creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.
+Be sure to use images that are representative of what will be submitted to the classifier during normal use. Otherwise, your model could learn to make predictions based on arbitrary characteristics that your images have in common. For example, if you're creating a classifier for apples vs. citrus, and you've used images of apples in hands and of citrus on white plates, the classifier may give undue importance to hands vs. plates, rather than apples vs. citrus.

-![Image of unexpected classification](./media/getting-started-improving-your-classifier/unexpected.png)
+![Image of fruits with unexpected matching.](./media/getting-started-improving-your-classifier/unexpected.png)

 To correct this problem, include a variety of images to ensure that your model can generalize well. Below are some ways you can make your training set more diverse:

 * __Background:__ Provide images of your object in front of different backgrounds. Photos in natural contexts are better than photos in front of neutral backgrounds as they provide more information for the classifier.

-![Image of background samples](./media/getting-started-improving-your-classifier/background.png)
+![Image of background samples.](./media/getting-started-improving-your-classifier/background.png)

-* __Lighting:__ Provide images with varied lighting (that is, taken with flash, high exposure, and so on), especially if the images used for prediction have different lighting. It is also helpful to use images with varying saturation, hue, and brightness.
+* __Lighting:__ Provide images with varied lighting (that is, taken with flash, high exposure, and so on), especially if the images used for prediction have different lighting. It's also helpful to use images with varying saturation, hue, and brightness.

-![Image of lighting samples](./media/getting-started-improving-your-classifier/lighting.png)
+![Image of lighting samples.](./media/getting-started-improving-your-classifier/lighting.png)

 * __Object Size:__ Provide images in which the objects vary in size and number (for example, a photo of bunches of bananas and a closeup of a single banana). Different sizing helps the classifier generalize better.

-![Image of size samples](./media/getting-started-improving-your-classifier/size.png)
+![Image of size samples.](./media/getting-started-improving-your-classifier/size.png)

-* __Camera Angle:__ Provide images taken with different camera angles. Alternatively, if all of your photos must be taken with fixed cameras (such as surveillance cameras), be sure to assign a different label to every regularly-occurring object to avoid overfitting—interpreting unrelated objects (such as lampposts) as the key feature.
+* __Camera Angle:__ Provide images taken with different camera angles. Alternatively, if all of your photos must be taken with fixed cameras (such as surveillance cameras), be sure to assign a different label to every regularly occurring object to avoid overfitting—interpreting unrelated objects (such as lampposts) as the key feature.

-![Image of angle samples](./media/getting-started-improving-your-classifier/angle.png)
+![Image of angle samples.](./media/getting-started-improving-your-classifier/angle.png)

 * __Style:__ Provide images of different styles of the same class (for example, different varieties of the same fruit). However, if you have objects of drastically different styles (such as Mickey Mouse vs. a real-life mouse), we recommend you label them as separate classes to better represent their distinct features.

-![Image of style samples](./media/getting-started-improving-your-classifier/style.png)
+![Image of style samples.](./media/getting-started-improving-your-classifier/style.png)

 ## Negative images (classifiers only)

-If you're using an image classifier, you may need to add _negative samples_ to help make your classifier more accurate. Negative samples are images which do not match any of the other tags. When you upload these images, apply the special **Negative** label to them.
+If you're using an image classifier, you may need to add _negative samples_ to help make your classifier more accurate. Negative samples are images that don't match any of the other tags. When you upload these images, apply the special **Negative** label to them.

 Object detectors handle negative samples automatically, because any image areas outside of the drawn bounding boxes are considered negative.

@@ -81,9 +79,9 @@ Object detectors handle negative samples automatically, because any image areas
 >
 > On the other hand, in cases where the negative images are just a variation of the images used in training, it is likely that the model will classify the negative images as a labeled class due to the great similarities. For example, if you have an orange vs. grapefruit classifier, and you feed in an image of a clementine, it may score the clementine as an orange because many features of the clementine resemble those of oranges. If your negative images are of this nature, we recommend you create one or more additional tags (such as **Other**) and label the negative images with this tag during training to allow the model to better differentiate between these classes.

-## Consider occlusion and truncation (object detectors only)
+## Occlusion and truncation (object detectors only)

-If you want your object detector to detect truncated objects (object is partially cut out of the image) or occluded objects (object is partially blocked by another object in the image), you'll need to include training images that cover those cases.
+If you want your object detector to detect truncated objects (objects that are partially cut out of the image) or occluded objects (objects that are partially blocked by other objects in the image), you'll need to include training images that cover those cases.

 > [!NOTE]
 > The issue of objects being occluded by other objects is not to be confused with **Overlap Threshold**, a parameter for rating model performance. The **Overlap Threshold** slider on the [Custom Vision website](https://customvision.ai) deals with how much a predicted bounding box must overlap with the true bounding box to be considered correct.
@@ -96,9 +94,9 @@ When you use or test the model by submitting images to the prediction endpoint,

 ![screenshot of the predictions tab, with images in view](./media/getting-started-improving-your-classifier/predictions.png)

-2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones which can bring the most improvements to the model are listed the top. To use a different sorting method, make a selection in the __Sort__ section.
+2. Hover over an image to see the tags that were predicted by the model. Images are sorted so that the ones that can bring the most improvements to the model are listed the top. To use a different sorting method, make a selection in the __Sort__ section.

-To add an image to your existing training data, select the image, set the correct tag(s), and click __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab.
+To add an image to your existing training data, select the image, set the correct tag(s), and select __Save and close__. The image will be removed from __Predictions__ and added to the set of training images. You can view it by selecting the __Training Images__ tab.

 ![Image of the tagging page](./media/getting-started-improving-your-classifier/tag.png)
articles/cognitive-services/Custom-Vision-Service/limits-and-quotas.md

Lines changed: 2 additions & 2 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: conceptual
-ms.date: 05/13/2021
+ms.date: 07/05/2022
 ms.author: pafarley
 ---

@@ -43,4 +43,4 @@ The number of training images per project and tags per project are expected to i

 > [!NOTE]
 > Images smaller than than 256 pixels will be accepted but upscaled.
-> Image aspect ratio should not be larger than 25
+> Image aspect ratio should not be larger than 25:1.

articles/cognitive-services/Custom-Vision-Service/test-your-model.md

Lines changed: 4 additions & 4 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: custom-vision
 ms.topic: how-to
-ms.date: 10/27/2021
+ms.date: 07/05/2022
 ms.author: pafarley
 ---

@@ -23,7 +23,7 @@ After you train your Custom Vision model, you can quickly test it using a locall

 ![The Quick Test button is shown in the upper right corner of the window.](./media/test-your-model/quick-test-button.png)

-2. In the **Quick Test** window, select in the **Submit Image** field and enter the URL of the image you want to use for your test. If you want to use a locally stored image instead, select the **Browse local files** button and select a local image file.
+1. In the **Quick Test** window, select in the **Submit Image** field and enter the URL of the image you want to use for your test. If you want to use a locally stored image instead, select the **Browse local files** button and select a local image file.

 ![Image of the submit image page](./media/test-your-model/submit-image.png)

@@ -40,7 +40,7 @@ You can now take the image submitted previously for testing and use it to retrai
 > [!TIP]
 > The default view shows images from the current iteration. You can use the __Iteration__ drop down field to view images submitted during previous iterations.

-2. Hover over an image to see the tags that were predicted by the classifier.
+1. Hover over an image to see the tags that were predicted by the classifier.

 > [!TIP]
 > Images are ranked, so that the images that can bring the most gains to the classifier are at the top. To select a different sorting, use the __Sort__ section.
@@ -49,7 +49,7 @@ You can now take the image submitted previously for testing and use it to retrai

 ![Image of the tagging page](./media/test-your-model/tag-image.png)

-3. Use the __Train__ button to retrain the classifier.
+1. Use the __Train__ button to retrain the classifier.

 ## Next steps

0 commit comments
