articles/cognitive-services/Computer-vision/concept-describing-images.md (1 addition, 2 deletions)

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: conceptual
-ms.date: 11/03/2022
+ms.date: 07/04/2023
 ms.author: pafarley
 ms.custom: seodec18, ignite-2022
 ---
@@ -58,7 +58,6 @@ The following JSON response illustrates what the Analyze API returns when descri
 
 ## Use the API
 
-
 The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section.
 
 * [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
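For reference, a minimal sketch of this call through the .NET SDK (`Microsoft.Azure.CognitiveServices.Vision.ComputerVision`, v7-era signatures assumed; the key, endpoint, and image URL below are placeholders):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

class DescribeImage
{
    static async Task Main()
    {
        // Placeholder key and endpoint: use the values from your own resource.
        var client = new ComputerVisionClient(
            new ApiKeyServiceClientCredentials("<your-key>"))
        { Endpoint = "https://<your-resource>.cognitiveservices.azure.com" };

        // SDK equivalent of passing Description in the visualFeatures query parameter.
        ImageAnalysis result = await client.AnalyzeImageAsync(
            "https://example.com/sample.jpg",
            visualFeatures: new List<VisualFeatureTypes?> { VisualFeatureTypes.Description });

        // Parse the "description" section of the response.
        foreach (ImageCaption caption in result.Description.Captions)
            Console.WriteLine($"{caption.Text} ({caption.Confidence:0.000})");
    }
}
```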
articles/cognitive-services/Computer-vision/concept-face-detection.md (11 additions, 11 deletions)

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: face-api
 ms.topic: conceptual
-ms.date: 12/27/2022
+ms.date: 07/04/2023
 ms.author: pafarley
 ---
 
@@ -31,7 +31,7 @@ Try out the capabilities of face detection quickly and easily using Vision Studi
 
 ## Face ID
 
-The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+The face ID is a unique identifier string for each detected face in an image. Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
 
 ## Face landmarks
 
@@ -49,7 +49,7 @@ The Detection_03 model currently has the most accurate landmark detection. The e
 
 Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:
 
-* **Accessories**. Whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
+* **Accessories**. Indicates whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
 * **Age**. The estimated age in years of a particular face.
 * **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
 * **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.
@@ -62,11 +62,11 @@ Attributes are a set of features that can optionally be detected by the [Face -
 
 
 
-For more details on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
-* **Makeup**. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
-* **Mask**. Whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
+For more information on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
+* **Makeup**. Indicates whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
+* **Mask**. Indicates whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
 * **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
-* **Occlusion**. Whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
+* **Occlusion**. Indicates whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
 * **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
 * **QualityForRecognition**. The overall image quality regarding whether the image being used in the detection is of sufficient quality to attempt face recognition on. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.
 >[!NOTE]
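As a rough illustration, requesting attributes through the .NET Face SDK (`Microsoft.Azure.CognitiveServices.Vision.Face`, 2.8-preview-era signatures assumed; key, endpoint, and image URL are placeholders) might look like the following. Note that `detection_03` supports only a subset of attributes, so this sketch sticks to head pose, mask, and quality:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class DetectWithAttributes
{
    static async Task Main()
    {
        // Placeholder key and endpoint: use the values from your own Face resource.
        var client = new FaceClient(new ApiKeyServiceClientCredentials("<your-key>"))
        { Endpoint = "https://<your-resource>.cognitiveservices.azure.com" };

        // returnFaceId is left off because face IDs require limited access approval.
        IList<DetectedFace> faces = await client.Face.DetectWithUrlAsync(
            "https://example.com/people.jpg",
            returnFaceId: false,
            returnFaceAttributes: new List<FaceAttributeType>
            {
                FaceAttributeType.HeadPose,
                FaceAttributeType.Mask,
                FaceAttributeType.QualityForRecognition
            },
            detectionModel: DetectionModel.Detection03,
            recognitionModel: RecognitionModel.Recognition04);

        foreach (DetectedFace face in faces)
        {
            // Each requested attribute comes back on FaceAttributes.
            Console.WriteLine($"Mask: {face.FaceAttributes.Mask.Type}, " +
                $"quality: {face.FaceAttributes.QualityForRecognition}");
        }
    }
}
```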
@@ -81,7 +81,7 @@ Use the following tips to make sure that your input images give the most accurat
 
 * The supported input image formats are JPEG, PNG, GIF (the first frame), BMP.
 * The image file size should be no larger than 6 MB.
-* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images with larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they are larger than the minimum detectable face size.
+* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they're larger than the minimum detectable face size.
 * The maximum detectable face size is 4096 x 4096 pixels.
 * Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected.
 * Some faces might not be recognized because of technical challenges, such as:
@@ -93,9 +93,9 @@ Use the following tips to make sure that your input images give the most accurat
 
 ### Input data with orientation information:
 
-Some input images with JPEG format might contain orientation information in Exchangeable image file format (Exif) metadata. If Exif orientation is available, images will be automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face will be estimated based on the rotated image.
+Some input images with JPEG format might contain orientation information in Exchangeable image file format (EXIF) metadata. If EXIF orientation is available, images are automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face are estimated based on the rotated image.
 
-To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most of image visualization tools will auto-rotate the image according to its Exif orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).
+To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most of the image visualization tools automatically rotate the image according to its EXIF orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).
 
 
 
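If you do need to apply the rotation in your own code, one possible approach on Windows uses System.Drawing (a sketch, not part of the article; tag 0x0112 is the standard EXIF orientation ID, and the rarer mirrored orientations are omitted):

```csharp
using System;
using System.Drawing; // On .NET Core/.NET 5+, add the System.Drawing.Common package (Windows only).

static class ExifRotation
{
    // EXIF orientation lives in tag 0x0112. Values 3, 6, and 8 are plain
    // rotations; 2, 4, 5, and 7 involve mirroring and are skipped here.
    public static void NormalizeOrientation(Bitmap image)
    {
        const int OrientationTag = 0x0112;
        if (Array.IndexOf(image.PropertyIdList, OrientationTag) < 0)
            return; // No EXIF orientation metadata present.

        switch (image.GetPropertyItem(OrientationTag).Value[0])
        {
            case 3: image.RotateFlip(RotateFlipType.Rotate180FlipNone); break;
            case 6: image.RotateFlip(RotateFlipType.Rotate90FlipNone); break;
            case 8: image.RotateFlip(RotateFlipType.Rotate270FlipNone); break;
        }
    }
}
```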
@@ -105,7 +105,7 @@ If you're detecting faces from a video feed, you may be able to improve performa
 
 * **Smoothing**: Many video cameras apply a smoothing effect. You should turn this off if you can because it creates a blur between frames and reduces clarity.
 * **Shutter Speed**: A faster shutter speed reduces the amount of motion between frames and makes each frame clearer. We recommend shutter speeds of 1/60 second or faster.
-* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This will result in clearer video frames.
+* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This results in clearer video frames.
 
 >[!NOTE]
 > A camera with a lower shutter angle will receive less light in each frame, so the image will be darker. You'll need to determine the right level to use.
articles/cognitive-services/Computer-vision/concept-ocr.md (3 additions, 3 deletions)

@@ -10,7 +10,7 @@ ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.custom: ignite-2022
 ms.topic: conceptual
-ms.date: 02/20/2023
+ms.date: 07/04/2023
 ms.author: pafarley
 ---
 
@@ -20,9 +20,9 @@ ms.author: pafarley
 >
 > For extracting text from PDF, Office, and HTML documents and document images, use the [Form Recognizer Read OCR model](../../applied-ai-services/form-recognizer/concept-read.md) optimized for text-heavy digital and scanned documents with an asynchronous API that makes it easy to power your intelligent document processing scenarios.
 
-OCR traditionally started as a machine-learningbased technique for extracting text from in-the-wild and non-document images like product labels, usergenerated images, screenshots, street signs, and posters. For several scenarios that including running OCR on single images that are not text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times.
+OCR traditionally started as a machine-learning-based technique for extracting text from in-the-wild and non-document images like product labels, user-generated images, screenshots, street signs, and posters. For several scenarios, such as single images that aren't text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times.
 
-## What is Computer Vision v4.0 Read OCR (preview)
+## What is Computer Vision v4.0 Read OCR (preview)?
 
 The new Computer Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md).
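For orientation, a single synchronous Read call against the 4.0 preview might look roughly like this (a sketch only: the route and `api-version` reflect the preview at the time and may change; key, endpoint, and image URL are placeholders):

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ReadOcr
{
    static async Task Main()
    {
        // Placeholders; check the current REST reference for the exact route.
        string endpoint = "https://<your-resource>.cognitiveservices.azure.com";
        string requestUri = endpoint + "/computervision/imageanalysis:analyze"
                          + "?features=read&api-version=2023-02-01-preview";

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");

        var body = new StringContent(
            "{\"url\":\"https://example.com/street-sign.jpg\"}",
            Encoding.UTF8, "application/json");

        // One synchronous call; OCR results come back under the readResult section.
        HttpResponseMessage response = await http.PostAsync(requestUri, body);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```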
articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/csharp-sdk.md (3 additions, 3 deletions)

@@ -25,8 +25,8 @@ Use the OCR client library to read printed and handwritten text from a remote im
 
 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
 * The [Visual Studio IDE](https://visualstudio.microsoft.com/vs/) or current version of [.NET Core](https://dotnet.microsoft.com/download/dotnet-core).
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource</a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-* You will need the key and endpoint from the resource you create to connect your application to the Computer Vision service. You'll paste your key and endpoint into the code below later in the quickstart.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource</a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+* You'll need the key and endpoint from the resource you create to connect your application to the Computer Vision service. Paste your key and endpoint into the code below later in the quickstart.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
 
 
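Rather than pasting literals into source, one illustrative pattern (not part of the quickstart; the variable names are arbitrary) is to pull the key and endpoint from environment variables:

```csharp
using System;

static class VisionConfig
{
    // Illustrative environment variable names; any names work as long as
    // they match what you set in your shell before running the app.
    public static readonly string Key =
        Environment.GetEnvironmentVariable("VISION_KEY")
        ?? throw new InvalidOperationException("Set VISION_KEY first.");

    public static readonly string Endpoint =
        Environment.GetEnvironmentVariable("VISION_ENDPOINT")
        ?? throw new InvalidOperationException("Set VISION_ENDPOINT first.");
}
```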
@@ -97,7 +97,7 @@ Use the OCR client library to read printed and handwritten text from a remote im
 
 #### [Visual Studio IDE](#tab/visual-studio)
 
-Click the **Debug** button at the top of the IDE window.
+Select the **Debug** button at the top of the IDE window.
articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/identity-csharp-sdk.md (2 additions, 2 deletions)

@@ -20,8 +20,8 @@ Get started with facial recognition using the Face client library for .NET. The
 * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
 * The [Visual Studio IDE](https://visualstudio.microsoft.com/vs/) or current version of [.NET Core](https://dotnet.microsoft.com/download/dotnet-core).
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-* You will need the key and endpoint from the resource you create to connect your application to the Face API.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+* You'll need the key and endpoint from the resource you create to connect your application to the Face API.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/identity-javascript-sdk.md

-* Once you have your Azure subscription, [Create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-* You will need the key and endpoint from the resource you create to connect your application to the Face API.
+* Once you have your Azure subscription, [Create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+* You'll need the key and endpoint from the resource you create to connect your application to the Face API.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
 
 
@@ -45,13 +45,13 @@ Get started with facial recognition using the Face client library for JavaScript
     npm init
     ```
 
-1. Install the `ms-rest-azure` and `azure-cognitiveservices-face` NPM packages:
+1. Install the `ms-rest-azure` and `azure-cognitiveservices-face` npm packages:
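The install command that follows in the quickstart isn't shown in this hunk, but it's presumably along the lines of `npm install ms-rest-azure azure-cognitiveservices-face`.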
articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/identity-python-sdk.md (2 additions, 2 deletions)

@@ -21,8 +21,8 @@ Get started with facial recognition using the Face client library for Python. Fo
 * [Python 3.x](https://www.python.org/)
 * Your Python installation should include [pip](https://pip.pypa.io/en/stable/). You can check if you have pip installed by running `pip --version` on the command line. Get pip by installing the latest version of Python.
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-* You will need the key and endpoint from the resource you create to connect your application to the Face API.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+* You'll need the key and endpoint from the resource you create to connect your application to the Face API.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/image-analysis-cpp-sdk-40.md (1 addition, 1 deletion)

@@ -25,7 +25,7 @@ Use the Image Analysis client SDK for C++ to analyze an image to read text and g
 
 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
 * For Windows development, the [Visual Studio IDE](https://visualstudio.microsoft.com/vs/) with workload **Desktop development with C++** enabled.
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource</a> in the Azure portal. In order to use the captioning feature in this quickstart, you must create your resource in one of the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. After it deploys, click **Go to resource**.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource</a> in the Azure portal. In order to use the captioning feature in this quickstart, you must create your resource in one of the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. After it deploys, select **Go to resource**.
 * You need the key and endpoint from the resource you create to connect your application to the Computer Vision service.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.