
Commit 11efd12

Learn Build Service GitHub App authored and committed
Merging changes synced from https://github.com/MicrosoftDocs/azure-docs-pr (branch live)
2 parents 1aa2b91 + 5d17616 commit 11efd12

34 files changed (+136, -136 lines)

articles/azure-arc/data/includes/azure-arc-data-preview-release.md

Lines changed: 2 additions & 2 deletions

@@ -10,11 +10,11 @@ ms.date: 05/02/2023
 At this time, a test or preview build is not available for the next release.
 -->

-July 2023 test release is now available.
+July 2023 preview release is now available.

 |Component|Value|
 |-----------|-----------|
-|Container images registry/repository |`mcr.microsoft.com/arcdata/test`|
+|Container images registry/repository |`mcr.microsoft.com/arcdata/preview`|
 |Container images tag |`v1.21.0_2023-07-11`|
 |**CRD names and version:**| |
 |`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1|

articles/cognitive-services/Computer-vision/concept-describing-images.md

Lines changed: 1 addition & 2 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.topic: conceptual
-ms.date: 11/03/2022
+ms.date: 07/04/2023
 ms.author: pafarley
 ms.custom: seodec18, ignite-2022
 ---

@@ -58,7 +58,6 @@ The following JSON response illustrates what the Analyze API returns when descri…

 ## Use the API

-
 The image description feature is part of the [Analyze Image](https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/56f91f2e778daf14a499f21b) API. You can call this API through a native SDK or through REST calls. Include `Description` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"description"` section.

 * [Quickstart: Image Analysis REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
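The paragraph changed above describes calling the Analyze Image API with `Description` in the **visualFeatures** query parameter. As a minimal illustration (not part of this commit), a Python sketch using the `requests` library against the v3.2 Analyze endpoint might look like the following; the endpoint, key, and image URL are placeholders.

```python
import requests

# Placeholders: substitute your own Computer Vision endpoint, key, and image URL.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
image_url = "https://example.com/sample-photo.jpg"

# Request only the Description feature from the Analyze Image API.
response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Description"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": image_url},
)
response.raise_for_status()

# Parse the "description" section of the JSON response.
for caption in response.json()["description"]["captions"]:
    print(f'{caption["text"]} (confidence {caption["confidence"]:.2f})')
```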

articles/cognitive-services/Computer-vision/concept-face-detection.md

Lines changed: 11 additions & 11 deletions

@@ -9,7 +9,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: face-api
 ms.topic: conceptual
-ms.date: 12/27/2022
+ms.date: 07/04/2023
 ms.author: pafarley
 ---

@@ -31,7 +31,7 @@ Try out the capabilities of face detection quickly and easily using Vision Studi…

 ## Face ID

-The face ID is a unique identifier string for each detected face in an image. Note that Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.
+The face ID is a unique identifier string for each detected face in an image. Face ID requires limited access approval, which you can apply for by filling out the [intake form](https://aka.ms/facerecognition). For more information, see the Face [limited access page](/legal/cognitive-services/computer-vision/limited-access-identity?context=%2Fazure%2Fcognitive-services%2Fcomputer-vision%2Fcontext%2Fcontext). You can request a face ID in your [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API call.

 ## Face landmarks

@@ -49,7 +49,7 @@ The Detection_03 model currently has the most accurate landmark detection. The e…

 Attributes are a set of features that can optionally be detected by the [Face - Detect](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) API. The following attributes can be detected:

-* **Accessories**. Whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
+* **Accessories**. Indicates whether the given face has accessories. This attribute returns possible accessories including headwear, glasses, and mask, with confidence score between zero and one for each accessory.
 * **Age**. The estimated age in years of a particular face.
 * **Blur**. The blurriness of the face in the image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
 * **Emotion**. A list of emotions with their detection confidence for the given face. Confidence scores are normalized, and the scores across all emotions add up to one. The emotions returned are happiness, sadness, neutral, anger, contempt, disgust, surprise, and fear.

@@ -62,11 +62,11 @@ Attributes are a set of features that can optionally be detected by the [Face -…

 ![A head with the pitch, roll, and yaw axes labeled](./media/headpose.1.jpg)

-For more details on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
-* **Makeup**. Whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
-* **Mask**. Whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
+For more information on how to use these values, see the [Head pose how-to guide](./how-to/use-headpose.md).
+* **Makeup**. Indicates whether the face has makeup. This attribute returns a Boolean value for eyeMakeup and lipMakeup.
+* **Mask**. Indicates whether the face is wearing a mask. This attribute returns a possible mask type, and a Boolean value to indicate whether nose and mouth are covered.
 * **Noise**. The visual noise detected in the face image. This attribute returns a value between zero and one and an informal rating of low, medium, or high.
-* **Occlusion**. Whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
+* **Occlusion**. Indicates whether there are objects blocking parts of the face. This attribute returns a Boolean value for eyeOccluded, foreheadOccluded, and mouthOccluded.
 * **Smile**. The smile expression of the given face. This value is between zero for no smile and one for a clear smile.
 * **QualityForRecognition** The overall image quality regarding whether the image being used in the detection is of sufficient quality to attempt face recognition on. The value is an informal rating of low, medium, or high. Only "high" quality images are recommended for person enrollment, and quality at or above "medium" is recommended for identification scenarios.
 >[!NOTE]

@@ -81,7 +81,7 @@ Use the following tips to make sure that your input images give the most accurat…

 * The supported input image formats are JPEG, PNG, GIF (the first frame), BMP.
 * The image file size should be no larger than 6 MB.
-* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images with larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they are larger than the minimum detectable face size.
+* The minimum detectable face size is 36 x 36 pixels in an image that is no larger than 1920 x 1080 pixels. Images with larger than 1920 x 1080 pixels have a proportionally larger minimum face size. Reducing the face size might cause some faces not to be detected, even if they're larger than the minimum detectable face size.
 * The maximum detectable face size is 4096 x 4096 pixels.
 * Faces outside the size range of 36 x 36 to 4096 x 4096 pixels will not be detected.
 * Some faces might not be recognized because of technical challenges, such as:

@@ -93,9 +93,9 @@ Use the following tips to make sure that your input images give the most accurat…

 ### Input data with orientation information:

-Some input images with JPEG format might contain orientation information in Exchangeable image file format (Exif) metadata. If Exif orientation is available, images will be automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face will be estimated based on the rotated image.
+Some input images with JPEG format might contain orientation information in Exchangeable image file format (EXIF) metadata. If EXIF orientation is available, images are automatically rotated to the correct orientation before sending for face detection. The face rectangle, landmarks, and head pose for each detected face are estimated based on the rotated image.

-To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most of image visualization tools will auto-rotate the image according to its Exif orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).
+To properly display the face rectangle and landmarks, you need to make sure the image is rotated correctly. Most of the image visualization tools automatically rotate the image according to its EXIF orientation by default. For other tools, you might need to apply the rotation using your own code. The following examples show a face rectangle on a rotated image (left) and a non-rotated image (right).

 ![Two face images with and without rotation](./media/image-rotation.png)
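The updated paragraph above notes that you might need to apply the EXIF rotation in your own code. A minimal sketch, assuming the Pillow library (any imaging library with EXIF support would do), could look like this:

```python
from PIL import Image, ImageOps

# Load a JPEG that may carry an EXIF orientation tag (placeholder path).
image = Image.open("face_photo.jpg")

# Rotate/flip the pixels to match the EXIF orientation, then drop the tag.
# After this, the face rectangle and landmarks returned by the service line up
# with the pixels you draw on.
upright = ImageOps.exif_transpose(image)
upright.save("face_photo_upright.jpg")
```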

@@ -105,7 +105,7 @@ If you're detecting faces from a video feed, you may be able to improve performa…

 * **Smoothing**: Many video cameras apply a smoothing effect. You should turn this off if you can because it creates a blur between frames and reduces clarity.
 * **Shutter Speed**: A faster shutter speed reduces the amount of motion between frames and makes each frame clearer. We recommend shutter speeds of 1/60 second or faster.
-* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This will result in clearer video frames.
+* **Shutter Angle**: Some cameras specify shutter angle instead of shutter speed. You should use a lower shutter angle if possible. This results in clearer video frames.

 >[!NOTE]
 > A camera with a lower shutter angle will receive less light in each frame, so the image will be darker. You'll need to determine the right level to use.
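To make the Face - Detect call and the optional attributes covered in this file concrete, here's a minimal Python sketch against the `/face/v1.0/detect` REST endpoint (not part of this commit). The endpoint, key, image URL, and the particular attribute list are assumptions; which attributes are returned depends on the detection model and your resource's access approvals.

```python
import requests

# Placeholders: substitute your own Face resource endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
image_url = "https://example.com/group-photo.jpg"

# Detect faces and request a few optional attributes. Face IDs aren't requested
# here because returning them requires limited access approval.
response = requests.post(
    f"{endpoint}/face/v1.0/detect",
    params={
        "detectionModel": "detection_01",
        "returnFaceId": "false",
        "returnFaceLandmarks": "true",
        "returnFaceAttributes": "headPose,blur,occlusion,glasses",
    },
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": image_url},
)
response.raise_for_status()

for face in response.json():
    rect = face["faceRectangle"]
    attributes = face["faceAttributes"]
    print(f"Face at {rect}: blur={attributes['blur']}, head pose={attributes['headPose']}")
```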

articles/cognitive-services/Computer-vision/concept-ocr.md

Lines changed: 3 additions & 3 deletions

@@ -10,7 +10,7 @@ ms.service: cognitive-services
 ms.subservice: computer-vision
 ms.custom: ignite-2022
 ms.topic: conceptual
-ms.date: 02/20/2023
+ms.date: 07/04/2023
 ms.author: pafarley
 ---

@@ -20,9 +20,9 @@ ms.author: pafarley
 >
 > For extracting text from PDF, Office, and HTML documents and document images, use the [Form Recognizer Read OCR model](../../applied-ai-services/form-recognizer/concept-read.md) optimized for text-heavy digital and scanned documents with an asynchronous API that makes it easy to power your intelligent document processing scenarios.

-OCR traditionally started as a machine-learning based technique for extracting text from in-the-wild and non-document images like product labels, user generated images, screenshots, street signs, and posters. For several scenarios that including running OCR on single images that are not text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times.
+OCR traditionally started as a machine-learning-based technique for extracting text from in-the-wild and non-document images like product labels, user-generated images, screenshots, street signs, and posters. For several scenarios, such as single images that aren't text-heavy, you need a fast, synchronous API or service. This allows OCR to be embedded in near real-time user experiences to enrich content understanding and follow-up user actions with fast turn-around times.

-## What is Computer Vision v4.0 Read OCR (preview)
+## What is Computer Vision v4.0 Read OCR (preview)?

 The new Computer Vision Image Analysis 4.0 REST API offers the ability to extract printed or handwritten text from images in a unified performance-enhanced synchronous API that makes it easy to get all image insights including OCR results in a single API operation. The Read OCR engine is built on top of multiple deep learning models supported by universal script-based models for [global language support](./language-support.md).
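For the synchronous v4.0 Read OCR described above, a minimal Python sketch against the Image Analysis 4.0 preview endpoint might look like this (not part of this commit); the `api-version` value and the `readResult` field name are assumptions based on the preview API and may differ between versions.

```python
import requests

# Placeholders: substitute your own Computer Vision endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-key>"
image_url = "https://example.com/street-sign.jpg"

# Single synchronous call to Image Analysis 4.0 (preview) requesting the Read (OCR) feature.
# The api-version shown here is an assumption; check the current preview version for your resource.
response = requests.post(
    f"{endpoint}/computervision/imageanalysis:analyze",
    params={"api-version": "2023-02-01-preview", "features": "read"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": image_url},
)
response.raise_for_status()

# The extracted text is returned under the readResult section of the JSON response.
print(response.json().get("readResult"))
```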

articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/csharp-sdk.md

Lines changed: 3 additions & 3 deletions

@@ -25,8 +25,8 @@ Use the OCR client library to read printed and handwritten text from a remote im…

 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
 * The [Visual Studio IDE](https://visualstudio.microsoft.com/vs/) or current version of [.NET Core](https://dotnet.microsoft.com/download/dotnet-core).
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-* You will need the key and endpoint from the resource you create to connect your application to the Computer Vision service. You'll paste your key and endpoint into the code below later in the quickstart.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+* You'll need the key and endpoint from the resource you create to connect your application to the Computer Vision service. Paste your key and endpoint into the code below later in the quickstart.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.


@@ -97,7 +97,7 @@ Use the OCR client library to read printed and handwritten text from a remote im…

 #### [Visual Studio IDE](#tab/visual-studio)

-Click the **Debug** button at the top of the IDE window.
+Select the **Debug** button at the top of the IDE window.

 #### [CLI](#tab/cli)

articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/identity-csharp-sdk.md

Lines changed: 2 additions & 2 deletions

@@ -20,8 +20,8 @@ Get started with facial recognition using the Face client library for .NET. The…
 * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
 * The [Visual Studio IDE](https://visualstudio.microsoft.com/vs/) or current version of [.NET Core](https://dotnet.microsoft.com/download/dotnet-core).
 * [!INCLUDE [contributor-requirement](../../../includes/quickstarts/contributor-requirement.md)]
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-* You will need the key and endpoint from the resource you create to connect your application to the Face API.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource </a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+* You'll need the key and endpoint from the resource you create to connect your application to the Face API.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.

articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/identity-javascript-sdk.md

Lines changed: 4 additions & 4 deletions

@@ -20,8 +20,8 @@ Get started with facial recognition using the Face client library for JavaScript…
 * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
 * The latest version of [Node.js](https://nodejs.org/en/)
 * [!INCLUDE [contributor-requirement](../../../includes/quickstarts/contributor-requirement.md)]
-* Once you have your Azure subscription, [Create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-* You will need the key and endpoint from the resource you create to connect your application to the Face API.
+* Once you have your Azure subscription, [Create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+* You'll need the key and endpoint from the resource you create to connect your application to the Face API.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.


@@ -45,13 +45,13 @@ Get started with facial recognition using the Face client library for JavaScript…
    npm init
    ```

-1. Install the `ms-rest-azure` and `azure-cognitiveservices-face` NPM packages:
+1. Install the `ms-rest-azure` and `azure-cognitiveservices-face` npm packages:

    ```console
    npm install @azure/cognitiveservices-face @azure/ms-rest-js uuid
    ```

-   Your app's `package.json` file will be updated with the dependencies.
+   Your app's `package.json` file is updated with the dependencies.

 1. Create a file named `index.js`, open it in a text editor, and paste in the following code:

articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/identity-python-sdk.md

Lines changed: 2 additions & 2 deletions

@@ -21,8 +21,8 @@ Get started with facial recognition using the Face client library for Python. Fo…
 * [Python 3.x](https://www.python.org/)
 * Your Python installation should include [pip](https://pip.pypa.io/en/stable/). You can check if you have pip installed by running `pip --version` on the command line. Get pip by installing the latest version of Python.
 * [!INCLUDE [contributor-requirement](../../../includes/quickstarts/contributor-requirement.md)]
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-* You will need the key and endpoint from the resource you create to connect your application to the Face API.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesFace" title="Create a Face resource" target="_blank">create a Face resource</a> in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
+* You'll need the key and endpoint from the resource you create to connect your application to the Face API.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.

articles/cognitive-services/Computer-vision/includes/quickstarts-sdk/image-analysis-cpp-sdk-40.md

Lines changed: 1 addition & 1 deletion

@@ -25,7 +25,7 @@ Use the Image Analysis client SDK for C++ to analyze an image to read text and g…

 * An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
 * For Windows development, the [Visual Studio IDE](https://visualstudio.microsoft.com/vs/) with workload **Desktop development with C++** enabled.
-* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource</a> in the Azure portal. In order to use the captioning feature in this quickstart, you must create your resource in one of the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. After it deploys, click **Go to resource**.
+* Once you have your Azure subscription, <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision" title="Create a Computer Vision resource" target="_blank">create a Computer Vision resource</a> in the Azure portal. In order to use the captioning feature in this quickstart, you must create your resource in one of the following Azure regions: East US, France Central, Korea Central, North Europe, Southeast Asia, West Europe, West US. After it deploys, select **Go to resource**.
 * You need the key and endpoint from the resource you create to connect your application to the Computer Vision service.
 * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
