articles/cognitive-services/Computer-vision/Home.md (5 additions & 5 deletions)
@@ -9,7 +9,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: overview
-ms.date: 01/27/2020
+ms.date: 05/27/2020
ms.author: pafarley
ms.custom: seodec18
#Customer intent: As a developer, I want to evaluate image processing functionality, so that I can determine if it will work for my information extraction or object detection scenarios.
-Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information, depending on the visual features you're interested in. For example, Computer Vision can determine if an image contains adult content, or it can find all of the human faces in an image.
+Azure's Computer Vision service provides developers with access to advanced algorithms that process images and return information based on the visual features you're interested in. For example, Computer Vision can determine whether an image contains adult content, find specific brands or objects, or find human faces.
-You can use Computer Vision in your application through a native SDK or by invoking the REST API directly. This page broadly covers what you can do with Computer Vision.
+You can use Computer Vision in your application through a client library SDK or by calling the REST API directly. This page broadly covers what you can do with Computer Vision.
## Computer Vision for digital asset management
Computer Vision can power many digital asset management (DAM) scenarios. DAM is the business process of organizing, storing, and retrieving rich media assets and managing digital rights and permissions. For example, a company may want to group and identify images based on visible logos, faces, objects, colors, and so on. Or, you might want to automatically [generate captions for images](./Tutorials/storage-lab-tutorial.md) and attach keywords so they're searchable. For an all-in-one DAM solution using Cognitive Services, Azure Cognitive Search, and intelligent reporting, see the [Knowledge Mining Solution Accelerator Guide](https://github.com/Azure-Samples/azure-search-knowledge-mining) on GitHub. For other DAM examples, see the [Computer Vision Solution Templates](https://github.com/Azure-Samples/Cognitive-Services-Vision-Solution-Templates) repository.
## Analyze images for insight
-You can analyze images to detect and provide insights about their visual features and characteristics. All of the features in the table below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) API.
+You can analyze images to provide insights about their visual features and characteristics. All of the features in the table below are provided by the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) API.
| Action | Description |
| ------ | ----------- |
@@ -47,7 +47,7 @@ You can analyze images to detect and provide insights about their visual feature
## Extract text from images
-You can use Computer Vision [Read](concept-recognizing-text.md#read-api) API to extract printed and handwritten text from images into a machine-readable character stream. The Read API uses our latest models and works with text on a variety of surfaces and backgrounds, such as receipts, posters, business cards, letters, and whiteboards. Currently, English and Spanish are the only supported languages.
+You can use the Computer Vision [Read](concept-recognizing-text.md#read-api) API to extract printed and handwritten text from images into a machine-readable character stream. The Read API uses the latest models and works with text on a variety of surfaces and backgrounds, such as receipts, posters, business cards, letters, and whiteboards. It currently works for seven different languages (see [Language support](./language-support.md)).
You can also use the [optical character recognition (OCR)](concept-recognizing-text.md#ocr-optical-character-recognition-api) API to extract printed text in several languages. If needed, OCR corrects the rotation of the recognized text and provides the frame coordinates of each word. OCR supports 25 languages and automatically detects the language of the recognized text.
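Both the Analyze Image and OCR operations described above are plain REST calls: a POST with the image URL in the JSON body and the subscription key in a header. The following Python sketch, using only the standard library, shows how such a request can be assembled; the endpoint, key, and image URL are placeholders, and the `/vision/v2.1/analyze` path and `build_analyze_request` helper name are assumptions for illustration, not values taken from this article.

```python
import json
from urllib import parse, request

def build_analyze_request(endpoint, key, image_url, features):
    """Build a POST request for the Analyze Image operation.

    `endpoint` is the resource endpoint, e.g.
    "https://westus.api.cognitive.microsoft.com". The requested visual
    features are passed as a comma-separated query parameter.
    """
    url = endpoint.rstrip("/") + "/vision/v2.1/analyze?" + parse.urlencode(
        {"visualFeatures": ",".join(features)}
    )
    return request.Request(
        url,
        data=json.dumps({"url": image_url}).encode("utf-8"),  # remote image
        headers={
            "Ocp-Apim-Subscription-Key": key,    # authenticates the call
            "Content-Type": "application/json",  # body carries the image URL
        },
    )

# With a real key, sending it would be a one-liner:
#   result = json.load(request.urlopen(build_analyze_request(...)))
```

Separating request construction from sending keeps the sketch testable without a live subscription.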
articles/cognitive-services/Computer-vision/QuickStarts/go-analyze.md (3 additions & 2 deletions)
@@ -9,13 +9,14 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: computer-vision
ms.topic: quickstart
-ms.date: 01/27/2020
+ms.date: 05/27/2020
ms.author: pafarley
ms.custom: seodec18
---
+
# Quickstart: Analyze a remote image using the Computer Vision REST API with Go
-In this quickstart, you analyze a remotely stored image to extract visual features using the Computer Vision REST API. With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) method, you can extract visual features based on image content.
+In this quickstart, you'll analyze a remotely stored image to extract visual features using the Computer Vision REST API. With the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) method, you can extract visual features based on image content.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/ai/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=cognitive-services) before you begin.
articles/cognitive-services/Content-Moderator/ecommerce-retail-catalog-moderation.md (2 additions & 2 deletions)
@@ -9,7 +9,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: content-moderator
ms.topic: tutorial
-ms.date: 01/27/2020
+ms.date: 05/27/2020
ms.author: pafarley
#As a developer at an e-commerce company, I want to use machine learning to both categorize product images and tag objectionable images for further review by my team.
@@ -30,7 +30,7 @@ This tutorial shows you how to:
The complete sample code is available in the [Samples eCommerce Catalog Moderation](https://github.com/MicrosoftContentModerator/samples-eCommerceCatalogModeration) repository on GitHub.
-If you don’t have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
articles/cognitive-services/Content-Moderator/facebook-post-moderation.md (9 additions & 9 deletions)
@@ -9,7 +9,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: content-moderator
ms.topic: tutorial
-ms.date: 01/27/2020
+ms.date: 05/27/2020
ms.author: pafarley
#Customer intent: As the moderator of a Facebook page, I want to use Azure's machine learning technology to automate and streamline the process of post moderation.
---
@@ -67,14 +67,14 @@ Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps:
| App Setting name | value |
| -------------------- |-------------|
-| cm:TeamId| Your Content Moderator TeamId |
-| cm:SubscriptionKey| Your Content Moderator subscription key - See [Credentials](review-tool-user-guide/credentials.md)|
-| cm:Region| Your Content Moderator region name, without the spaces. You can find this in the **Location** field of the **Overview** tab of your Azure resource.|
-| cm:ImageWorkflow| Name of the workflow to run on Images |
-| cm:TextWorkflow| Name of the workflow to run on Text |
-| cm:CallbackEndpoint| Url for the CMListener Function App that you will create later in this guide |
-| fb:VerificationToken| A secret token that you create, used to subscribe to the Facebook feed events |
-| fb:PageAccessToken| The Facebook graph api access token does not expire and allows the function Hide/Delete posts on your behalf. You will get this token at a later step. |
+|`cm:TeamId`| Your Content Moderator TeamId |
+|`cm:SubscriptionKey`| Your Content Moderator subscription key. See [Credentials](review-tool-user-guide/credentials.md). |
+|`cm:Region`| Your Content Moderator region name, without the spaces. You can find this name in the **Location** field of the **Overview** tab of your Azure resource. |
+|`cm:ImageWorkflow`| Name of the workflow to run on images |
+|`cm:TextWorkflow`| Name of the workflow to run on text |
+|`cm:CallbackEndpoint`| URL of the CMListener function app that you'll create later in this guide |
+|`fb:VerificationToken`| A secret token that you create, used to subscribe to Facebook feed events |
+|`fb:PageAccessToken`| The Facebook Graph API access token, which does not expire and allows the function to hide or delete posts on your behalf. You'll get this token in a later step. |
articles/cognitive-services/Content-Moderator/includes/quickstarts/content-moderator-client-library-csharp.md
articles/cognitive-services/form-recognizer/quickstarts/curl-receipts.md (3 additions & 3 deletions)
@@ -8,22 +8,22 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: forms-recognizer
ms.topic: quickstart
-ms.date: 01/27/2020
+ms.date: 05/27/2020
ms.author: pafarley
#Customer intent: As a developer or data scientist familiar with cURL, I want to learn how to use a prebuilt Form Recognizer model to extract my receipt data.
---

# Quickstart: Extract receipt data using the Form Recognizer REST API with cURL

-In this quickstart, you'll use the Azure Form Recognizer REST API with cURL to extract and identify relevant information in USA sales receipts.
+In this quickstart, you'll use the Azure Form Recognizer REST API with cURL to extract and identify relevant information from USA sales receipts.

If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.

## Prerequisites

To complete this quickstart, you must have:
- [cURL](https://curl.haxx.se/windows/) installed.
-- A URL for an image of a receipt. You can use a [sample image](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/contoso-allinone.jpg?raw=true) for this quickstart.
+- A URL for an image of a receipt. You can use a [sample image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg) for this quickstart.
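Receipt analysis in this quickstart is asynchronous: you POST the image to the analyze endpoint, receive an `Operation-Location` URL in the response headers, and then GET that URL until the reported status is no longer `notStarted` or `running`. A minimal polling helper, sketched in Python under the assumption that those are the in-progress status values of the v2.0-preview responses; `get_result` is a hypothetical stand-in for whatever function fetches and parses the operation JSON.

```python
import time

def poll_operation(get_result, interval=1.0, max_tries=20):
    """Call `get_result()` (which should GET the Operation-Location URL
    and return the parsed JSON) until the analysis finishes or fails."""
    for _ in range(max_tries):
        result = get_result()
        if result.get("status") not in ("notStarted", "running"):
            return result  # e.g. "succeeded" or "failed"
        time.sleep(interval)
    raise TimeoutError("receipt analysis did not finish in time")
```

Because the fetcher is passed in, the same loop works whether the GET is done with cURL via `subprocess`, `urllib`, or any HTTP client.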
articles/cognitive-services/form-recognizer/quickstarts/python-labeled-data.md (12 additions & 9 deletions)
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: forms-recognizer
ms.topic: quickstart
-ms.date: 02/19/2020
+ms.date: 05/27/2020
ms.author: pafarley

---
@@ -23,23 +23,23 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
To complete this quickstart, you must have:
- [Python](https://www.python.org/downloads/) installed (if you want to run the sample locally).
-- A set of at least six forms of the same type. You will use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) for this quickstart. Upload the training files to the root of a blob storage container in an Azure Storage account.
+- A set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) for this quickstart. Upload the training files to the root of a blob storage container in an Azure Storage account.
-Next you'll need to set up the required input data. The labeled data feature has special input requirements beyond those needed to train a custom model.
+Next you'll need to set up the required input data. The labeled data feature has special input requirements beyond what's needed to train a custom model without labels.

Make sure all the training documents are of the same format. If you have forms in multiple formats, organize them into sub-folders based on common format. When you train, you'll need to direct the API to a sub-folder.

In order to train a model using labeled data, you'll need the following files as inputs in the sub-folder. You'll learn how to create these files below.

* **Source forms** – the forms to extract data from. Supported types are JPEG, PNG, PDF, or TIFF.
-* **OCR layout files** - JSON files that describe the sizes and positions of all readable text in each source form. You'll use the Form Recognizer Layout API to generate this data.
-* **Label files** - JSON files that describe data labels which a user has entered manually.
+* **OCR layout files** - these are JSON files that describe the sizes and positions of all readable text in each source form. You'll use the Form Recognizer Layout API to generate this data.
+* **Label files** - these are JSON files that describe the data labels that a user has entered manually.

All of these files should occupy the same sub-folder and be in the following format:
@@ -113,7 +113,7 @@ You need OCR result files in order for the service to consider the corresponding
### Create the label files

-Label files contain key-value associations that a user has entered manually. They are needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training.
+Label files contain key-value associations that a user has entered manually. They are needed for labeled data training, but not every source file needs to have a corresponding label file. Source files without labels will be treated as ordinary training documents. We recommend five or more labeled files for reliable training. You can use a UI tool like the [sample labeling tool](./label-tool.md) to generate these files.
When you create a label file, you can optionally specify regions—exact positions of values on the document. This will give the training even higher accuracy. Regions are formatted as a set of eight values corresponding to four X,Y coordinates: top-left, top-right, bottom-right, and bottom-left. Coordinate values are between zero and one, scaled to the dimensions of the page.
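The eight-value region layout described above is easy to get wrong, so here is a small sketch of the conversion from a pixel-space bounding box to a normalized region; the `to_region` helper name and the `(left, top, right, bottom)` input layout are illustrative choices, not part of the Form Recognizer API.

```python
def to_region(pixel_box, page_width, page_height):
    """Convert (left, top, right, bottom) pixel coordinates into the
    eight-value region format: X,Y pairs for the top-left, top-right,
    bottom-right, and bottom-left corners, each scaled to 0-1 relative
    to the page dimensions."""
    left, top, right, bottom = pixel_box
    corners = [(left, top), (right, top), (right, bottom), (left, bottom)]
    return [
        round(value / extent, 4)
        for x, y in corners
        for value, extent in ((x, page_width), (y, page_height))
    ]
```

For example, a box spanning pixels (85, 170) to (425, 210) on an 850x1100 page yields X values 0.1 and 0.5 and Y values of roughly 0.1545 and 0.1909, in top-left, top-right, bottom-right, bottom-left order.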
@@ -184,8 +184,8 @@ For each source form, the corresponding label file should have the original file
...
```

-> [!NOTE]
-> You can only apply one label to each text element, and each label can only be applied once per page. You cannot currently apply a label across multiple pages.
+> [!IMPORTANT]
+> You can only apply one label to each text element, and each label can only be applied once per page. You cannot apply a label across multiple pages.
## Train a model using labeled data
@@ -551,4 +551,7 @@ We understand this scenario is essential for our customers, and we are working o
## Next steps

-In this quickstart, you learned how to use the Form Recognizer REST API with Python to train a model with manually labeled data. Next, see the [API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeWithCustomForm) to explore the Form Recognizer API in more depth.
+In this quickstart, you learned how to use the Form Recognizer REST API with Python to train a model with manually labeled data. Next, see the API reference documentation to explore the Form Recognizer API in more depth.
+
+> [!div class="nextstepaction"]
+> [REST API reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-preview/operations/AnalyzeReceiptAsync)
articles/cognitive-services/form-recognizer/quickstarts/python-receipts.md (2 additions & 2 deletions)
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: cognitive-services
ms.subservice: forms-recognizer
ms.topic: quickstart
-ms.date: 01/27/2020
+ms.date: 05/27/2020
ms.author: pafarley
#Customer intent: As a developer or data scientist familiar with Python, I want to learn how to use a prebuilt Form Recognizer model to extract my receipt data.
---
@@ -23,7 +23,7 @@ If you don't have an Azure subscription, create a [free account](https://azure.m
To complete this quickstart, you must have:
- [Python](https://www.python.org/downloads/) installed (if you want to run the sample locally).
-- A URL for an image of a receipt. You can use a [sample image](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/curl/form-recognizer/contoso-allinone.jpg?raw=true) for this quickstart.
+- A URL for an image of a receipt. You can use a [sample image](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/contoso-allinone.jpg) for this quickstart.