Commit b7543ba

Merge pull request #116697 from PatrickFarley/freshness-pass
[cog serv] Freshness pass
2 parents: ba4062e + 04a9235

12 files changed: 42 additions & 39 deletions

articles/cognitive-services/Computer-vision/Home.md

Lines changed: 5 additions & 5 deletions
@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: overview
-ms.date: 01/27/2020
+ms.date: 05/27/2020
 ms.author: pafarley
 ms.custom: seodec18
 #Customer intent: As a developer, I want to evaluate image processing functionality, so that I can determine if it will work for my information extraction or object detection scenarios.
@@ -19,17 +19,17 @@ ms.custom: seodec18

 [!INCLUDE [TLS 1.2 enforcement](../../../includes/cognitive-services-tls-announcement.md)]

-Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information, depending on the visual features you're interested in. For example, Computer Vision can determine if an image contains adult content, or it can find all of the human faces in an image.
+Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.

-You can use Computer Vision in your application through a native SDK or by invoking the REST API directly. This page broadly covers what you can do with Computer Vision.
+You can use Computer Vision in your application through a client library SDK or by calling the REST API directly. This page broadly covers what you can do with Computer Vision.

 ## Computer Vision for digital asset management

 Computer Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. For an all-in-one DAM solution using Cognitive Services, Azure Cognitive Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Computer Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository.

 ## Analyze images for insight

-You can analyze images to detect and provide insights about their visual features and characteristics. All of the features in the table below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) API.
+You can analyze images to provide insights about their visual features and characteristics. All of the features in the table below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) API.

 | Action | Description |
 | ------ | ----------- |
@@ -47,7 +47,7 @@ You can analyze images to detect and provide insights about their visual feature

 ## Extract text from images

-You can use Computer Vision [Read](concept-recognizing-text.md#read-api) API to extract printed and handwritten text from images into a machine-readable character stream. The Read API uses our latest models and works with text on a variety of surfaces and backgrounds, such as receipts, posters, business cards, letters, and whiteboards. Currently, English and Spanish are the only supported languages.
+You can use the Computer Vision [Read](concept-recognizing-text.md#read-api) API to extract printed and handwritten text from images into a machine-readable character stream. The Read API uses the latest models and works with text on a variety of surfaces and backgrounds, such as receipts, posters, business cards, letters, and whiteboards. It currently works for seven different languages (see [Language support](./language-support.md)).

 You can also use the [optical character recognition (OCR)](concept-recognizing-text.md#ocr-optical-character-recognition-api) API to extract printed text in several languages. If needed, OCR corrects the rotation of the recognized text and provides the frame coordinates of each word. OCR supports 25 languages and automatically detects the language of the recognized text.
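
The Analyze Image operation referenced in this file is a single authenticated POST against the service endpoint. The following Python sketch (standard library only) shows how such a request could be assembled; the endpoint host, the key placeholder, and the chosen feature list are illustrative assumptions, not values taken from this commit.

```python
import json
import urllib.parse
import urllib.request

# Illustrative placeholders; substitute your own resource's endpoint and key.
ENDPOINT = "https://westcentralus.api.cognitive.microsoft.com"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def build_analyze_request(image_url, features=("Description", "Adult", "Faces")):
    """Build a POST request for the v3.0 Analyze Image operation on a remote image."""
    query = urllib.parse.urlencode({"visualFeatures": ",".join(features)})
    return urllib.request.Request(
        f"{ENDPOINT}/vision/v3.0/analyze?{query}",
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send the built request once real credentials are filled in:
#   result = json.load(urllib.request.urlopen(build_analyze_request("https://example.com/photo.jpg")))
```

The JSON response contains one section per requested visual feature (for example, a description block, adult-content scores, and face rectangles).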

articles/cognitive-services/Computer-vision/QuickStarts/go-analyze.md

Lines changed: 3 additions & 2 deletions
@@ -9,13 +9,14 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: quickstart
-ms.date: 01/27/2020
+ms.date: 05/27/2020
 ms.author: pafarley
 ms.custom: seodec18
 ---
+
 # Quickstart: Analyze a remote image using the Computer Vision REST API with Go

-In this quickstart, you analyze a remotely stored image to extract visual features using the Computer Vision REST API. With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) method, you can extract visual features based on image content.
+In this quickstart, you'll analyze a remotely stored image to extract visual features using the Computer Vision REST API. With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) method, you can extract visual features based on image content.

 If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/ai/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cognitive-services) before you begin.

articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: content-moderator
 ms.topic: tutorial
-ms.date: 01/27/2020
+ms.date: 05/27/2020
 ms.author: pafarley

 #As a developer at an e-commerce company, I want to use machine learning to both categorize product images and tag objectionable images for further review by my team.
@@ -30,7 +30,7 @@ This tutorial shows you how to:

 The complete sample code is available in the [Samples eCommerce Catalog Moderation](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration) repository on GitHub.

-If you dont have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

 ## Prerequisites

articles/cognitive-services/Content-Moderator/facebook-post-moderation.md

Lines changed: 9 additions & 9 deletions
@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: content-moderator
 ms.topic: tutorial
-ms.date: 01/27/2020
+ms.date: 05/27/2020
 ms.author: pafarley
 #Customer intent: As the moderator of a Facebook page, I want to use Azure's machine learning technology to automate and streamline the process of post moderation.
 ---
@@ -67,14 +67,14 @@ Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:

 | App Setting name | value |
 | -------------------- |-------------|
-| cm:TeamId | Your Content Moderator TeamId |
-| cm:SubscriptionKey | Your Content Moderator subscription key - See [Credentials](review-tool-user-guide/credentials.md) |
-| cm:Region | Your Content Moderator region name, without the spaces. You can find this in the **Location** field of the **Overview** tab of your Azure resource.|
-| cm:ImageWorkflow | Name of the workflow to run on Images |
-| cm:TextWorkflow | Name of the workflow to run on Text |
-| cm:CallbackEndpoint | Url for the CMListener Function App that you will create later in this guide |
-| fb:VerificationToken | A secret token that you create, used to subscribe to the Facebook feed events |
-| fb:PageAccessToken | The Facebook graph api access token does not expire and allows the function Hide/Delete posts on your behalf. You will get this token at a later step. |
+| `cm:TeamId` | Your Content Moderator TeamId |
+| `cm:SubscriptionKey` | Your Content Moderator subscription key. See [Credentials](review-tool-user-guide/credentials.md). |
+| `cm:Region` | Your Content Moderator region name, without the spaces. You can find this name in the **Location** field of the **Overview** tab of your Azure resource. |
+| `cm:ImageWorkflow` | Name of the workflow to run on images |
+| `cm:TextWorkflow` | Name of the workflow to run on text |
+| `cm:CallbackEndpoint` | URL of the CMListener Function App that you will create later in this guide |
+| `fb:VerificationToken` | A secret token that you create, used to subscribe to Facebook feed events |
+| `fb:PageAccessToken` | The Facebook Graph API access token. It does not expire, and it allows the function to hide or delete posts on your behalf. You will get this token at a later step. |

 Click the **Save** button at the top of the page.
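
The `fb:VerificationToken` setting above is used in Facebook's standard webhook verification handshake: Facebook calls your endpoint with `hub.mode`, `hub.verify_token`, and `hub.challenge` query parameters and expects the challenge echoed back when the token matches. A minimal Python sketch of that check follows; the function name and return shape are illustrative, not code from this tutorial.

```python
def handle_verification(params, expected_token):
    """Respond to Facebook's webhook verification GET request.

    params: dict of query-string parameters from the incoming request.
    Returns an (http_status, body) pair: echo hub.challenge back on a
    token match, otherwise reject with 403.
    """
    if (params.get("hub.mode") == "subscribe"
            and params.get("hub.verify_token") == expected_token):
        return 200, params.get("hub.challenge", "")
    return 403, "Verification token mismatch"
```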

articles/cognitive-services/Content-Moderator/includes/quickstarts/content-moderator-client-library-csharp.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: content-moderator
 ms.topic: quickstart
-ms.date: 01/27/2020
+ms.date: 05/27/2020
 ms.author: pafarley
 ---

articles/cognitive-services/form-recognizer/quickstarts/curl-receipts.md

Lines changed: 3 additions & 3 deletions
@@ -8,22 +8,22 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: forms-recognizer
 ms.topic: quickstart
-ms.date: 01/27/2020
+ms.date: 05/27/2020
 ms.author: pafarley
 #Customer intent: As a developer or data scientist familiar with cURL, I want to learn how to use a prebuilt Form Recognizer model to extract my receipt data.
 ---

 # Quickstart: Extract receipt data using the Form Recognizer REST API with cURL

-In this quickstart, you'll use the Azure Form Recognizer REST API with cURL to extract and identify relevant information in USA sales receipts.
+In this quickstart, you'll use the Azure Form Recognizer REST API with cURL to extract and identify relevant information from USA sales receipts.

 If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

 ## Prerequisites

 To complete this quickstart, you must have:
 - [cURL](https://curl.haxx.se/windows/) installed.
-- A URL for an image of a receipt. You can use a [sample image](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/contoso-allinone.jpg?raw=true) for this quickstart.
+- A URL for an image of a receipt. You can use a [sample image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg) for this quickstart.

 ## Create a Form Recognizer resource
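
Receipt analysis with the REST API is asynchronous: you POST the receipt URL to the prebuilt receipt model's analyze operation, receive a 202 response whose Operation-Location header points at the result, and poll that URL with GET. The Python sketch below builds the initial request under the v2.0-preview route this quickstart targets; the endpoint host, key placeholder, and `source` body field are assumptions to verify against the API reference.

```python
import json
import urllib.request

# Illustrative placeholders; use your own Form Recognizer endpoint and key.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def build_receipt_request(receipt_url, api_version="v2.0-preview"):
    """Build the POST that starts prebuilt receipt analysis.

    The service replies 202 Accepted with an Operation-Location header,
    which you then poll with GET until the analysis succeeds.
    """
    return urllib.request.Request(
        f"{ENDPOINT}/formrecognizer/{api_version}/prebuilt/receipt/analyze",
        data=json.dumps({"source": receipt_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )
```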

articles/cognitive-services/form-recognizer/quickstarts/curl-train-extract.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: forms-recognizer
 ms.topic: quickstart
-ms.date: 01/27/2020
+ms.date: 05/27/2020
 ms.author: pafarley
 #Customer intent: As a developer or data scientist familiar with cURL, I want to learn how to use Form Recognizer to extract my form data.
 ---

articles/cognitive-services/form-recognizer/quickstarts/python-labeled-data.md

Lines changed: 12 additions & 9 deletions
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: forms-recognizer
 ms.topic: quickstart
-ms.date: 02/19/2020
+ms.date: 05/27/2020
 ms.author: pafarley

 ---
@@ -23,23 +23,23 @@ If you don't have an Azure subscription, create a [free account](https://azure.m

 To complete this quickstart, you must have:
 - [Python](https://www.python.org/downloads/) installed (if you want to run the sample locally).
-- A set of at least six forms of the same type. You will use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) for this quickstart. Upload the training files to the root of a blob storage container in an Azure Storage account.
+- A set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) for this quickstart. Upload the training files to the root of a blob storage container in an Azure Storage account.

 ## Create a Form Recognizer resource

 [!INCLUDE [create resource](../includes/create-resource.md)]

 ## Set up training data

-Next you'll need to set up the required input data. The labeled data feature has special input requirements beyond those needed to train a custom model.
+Next you'll need to set up the required input data. The labeled data feature has special input requirements beyond what's needed to train a custom model without labels.

 Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into sub-folders based on common format. When you train, you'll need to direct the API to a sub-folder.

 In order to train a model using labeled data, you'll need the following files as inputs in the sub-folder. You'll learn how to create these files below.

 * **Source forms** – the forms to extract data from. Supported types are JPEG, PNG, PDF, or TIFF.
-* **OCR layout files** - JSON files that describe the sizes and positions of all readable text in each source form. You'll use the Form Recognizer Layout API to generate this data.
-* **Label files** - JSON files that describe data labels which a user has entered manually.
+* **OCR layout files** - these are JSON files that describe the sizes and positions of all readable text in each source form. You'll use the Form Recognizer Layout API to generate this data.
+* **Label files** - these are JSON files that describe the data labels that a user has entered manually.

 All of these files should occupy the same sub-folder and be in the following format:

@@ -113,7 +113,7 @@ You need OCR result files in order for the service to consider the corresponding

 ### Create the label files

-Label files contain key-value associations that a user has entered manually. They are needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training.
+Label files contain key-value associations that a user has entered manually. They are needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like the [sample labeling tool](./label-tool.md) to generate these files.

 When you create a label file, you can optionally specify regions—exact positions of values on the document. This will give the training even higher accuracy. Regions are formatted as a set of eight values corresponding to four X,Y coordinates: top-left, top-right, bottom-right, and bottom-left. Coordinate values are between zero and one, scaled to the dimensions of the page.

@@ -184,8 +184,8 @@ For each source form, the corresponding label file should have the original file
 ...
 ```

-> [!NOTE]
-> You can only apply one label to each text element, and each label can only be applied once per page. You cannot currently apply a label across multiple pages.
+> [!IMPORTANT]
+> You can only apply one label to each text element, and each label can only be applied once per page. You cannot apply a label across multiple pages.

 ## Train a model using labeled data
@@ -551,4 +551,7 @@ We understand this scenario is essential for our customers, and we are working o

 ## Next steps

-In this quickstart, you learned how to use the Form Recognizer REST API with Python to train a model with manually labeled data. Next, see the [API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeWithCustomForm) to explore the Form Recognizer API in more depth.
+In this quickstart, you learned how to use the Form Recognizer REST API with Python to train a model with manually labeled data. Next, see the API reference documentation to explore the Form Recognizer API in more depth.
+
+> [!div class="nextstepaction"]
+> [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeReceiptAsync)
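
The region format described in this file (eight values, four X,Y corner pairs scaled to the page) can be illustrated with a small helper. The function below is a hypothetical convenience for building label files, not part of the Form Recognizer API: it converts a value's pixel-space bounding box into the normalized top-left, top-right, bottom-right, bottom-left sequence.

```python
def to_region(x, y, width, height, page_width, page_height):
    """Convert a pixel-space bounding box (top-left corner plus size) into
    the eight normalized values a label file region uses: X,Y pairs for the
    top-left, top-right, bottom-right, and bottom-left corners, each in [0, 1]."""
    corners = [
        (x, y),                   # top-left
        (x + width, y),           # top-right
        (x + width, y + height),  # bottom-right
        (x, y + height),          # bottom-left
    ]
    return [
        round(value / extent, 4)
        for cx, cy in corners
        for value, extent in ((cx, page_width), (cy, page_height))
    ]
```

For example, on a 1000 x 500 pixel page, a box at (100, 50) sized 200 x 100 yields [0.1, 0.1, 0.3, 0.1, 0.3, 0.3, 0.1, 0.3].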

articles/cognitive-services/form-recognizer/quickstarts/python-layout.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: forms-recognizer
 ms.topic: quickstart
-ms.date: 02/19/2020
+ms.date: 05/27/2020
 ms.author: pafarley
 ---

articles/cognitive-services/form-recognizer/quickstarts/python-receipts.md

Lines changed: 2 additions & 2 deletions
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: forms-recognizer
 ms.topic: quickstart
-ms.date: 01/27/2020
+ms.date: 05/27/2020
 ms.author: pafarley
 #Customer intent: As a developer or data scientist familiar with Python, I want to learn how to use a prebuilt Form Recognizer model to extract my receipt data.
 ---
@@ -23,7 +23,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m

 To complete this quickstart, you must have:
 - [Python](https://www.python.org/downloads/) installed (if you want to run the sample locally).
-- A URL for an image of a receipt. You can use a [sample image](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/contoso-allinone.jpg?raw=true) for this quickstart.
+- A URL for an image of a receipt. You can use a [sample image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg) for this quickstart.

 ## Create a Form Recognizer resource
