Commit a2fecb1

Merge pull request #76432 from paulth1/cognitive-services-face-articles-batch1
edit pass: Cognitive services face articles batch1
2 parents 8b04eb7 + 2cb2b9c commit a2fecb1

File tree

4 files changed: +137 -132 lines changed


articles/cognitive-services/Face/Face-API-How-to-Topics/HowtoDetectFacesinImage.md

Lines changed: 18 additions & 18 deletions
@@ -15,29 +15,29 @@ ms.author: sbowles

# Get face detection data

18- This guide will demonstrate how to use face detection to extract attributes like gender, age, or pose from a given image. The code snippets in this guide are written in C# using the Face API client library, but the same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).
18+ This guide demonstrates how to use face detection to extract attributes like gender, age, or pose from a given image. The code snippets in this guide are written in C# by using the Azure Cognitive Services Face API client library. The same functionality is available through the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236).

20- This guide will show you how to:
20+ This guide shows you how to:

- Get the locations and dimensions of faces in an image.
23- - Get the locations of various face landmarks (pupils, nose, mouth, and so on) in an image.
24- - Guess the gender, age, and emotion, and other attributes of a detected face.
23+ - Get the locations of various face landmarks, such as pupils, nose, and mouth, in an image.
24+ - Guess the gender, age, emotion, and other attributes of a detected face.

## Setup

28- This guide assumes you have already constructed a **[FaceClient](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient?view=azure-dotnet)** object, named `faceClient`, with a Face subscription key and endpoint URL. From here, you can use the face detection feature by calling either **[DetectWithUrlAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync?view=azure-dotnet)** (used in this guide) or **[DetectWithStreamAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync?view=azure-dotnet)**. See the [Detect Faces quickstart for C#](../quickstarts/csharp-detect-sdk.md) for instructions on how to set this up.
28+ This guide assumes that you already constructed a [FaceClient](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient?view=azure-dotnet) object, named `faceClient`, with a Face subscription key and endpoint URL. From here, you can use the face detection feature by calling either [DetectWithUrlAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync?view=azure-dotnet), which is used in this guide, or [DetectWithStreamAsync](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync?view=azure-dotnet). For instructions on how to set up this feature, see the [Detect faces quickstart for C#](../quickstarts/csharp-detect-sdk.md).

30- This guide will focus on the specifics of the Detect call—what arguments you can pass and what you can do with the returned data. We recommend only querying for the features you need, as each operation takes additional time to complete.
30+ This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query for only the features you need. Each operation takes additional time to complete.

## Get basic face data

34- To find faces and get their locations in an image, call the method with the _returnFaceId_ parameter set to **true** (default).
34+ To find faces and get their locations in an image, call the method with the _returnFaceId_ parameter set to **true**. This setting is the default.

```csharp
IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(imageUrl, true, false, null);
```

40- The returned **[DetectedFace](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface?view=azure-dotnet)** objects can be queried for their unique IDs and a rectangle which gives the pixel coordinates of the face.
40+ You can query the returned [DetectedFace](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface?view=azure-dotnet) objects for their unique IDs and a rectangle that gives the pixel coordinates of the face.

```csharp
foreach (var face in faces)
```

@@ -47,17 +47,17 @@ foreach (var face in faces)

```csharp
}
```

50- See **[FaceRectangle](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle?view=azure-dotnet)** for information on how to parse the location and dimensions of the face. Usually, this rectangle contains the eyes, eyebrows, nose, and mouth; the top of head, ears, and chin are not necessarily included. If you intend to use the face rectangle to crop a complete head or mid-shot portrait (a photo ID type image), you may want to expand the rectangle by a certain margin in each direction.
50+ For information on how to parse the location and dimensions of the face, see [FaceRectangle](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle?view=azure-dotnet). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of the head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, perhaps for a photo ID-type image, you can expand the rectangle in each direction.
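The rectangle expansion described above can be sketched as a small helper. This is a hypothetical utility, not part of the Face client library; the fractional margin and the clamping to image bounds are illustrative assumptions:

```csharp
using System;

// Hypothetical helper (not part of the Face SDK): expand a face rectangle
// by a fractional margin on each side and clamp it to the image bounds.
static (int Left, int Top, int Width, int Height) ExpandRectangle(
    int left, int top, int width, int height,
    double margin, int imageWidth, int imageHeight)
{
    int dx = (int)(width * margin);   // horizontal padding in pixels
    int dy = (int)(height * margin);  // vertical padding in pixels
    int newLeft = Math.Max(0, left - dx);
    int newTop = Math.Max(0, top - dy);
    int newRight = Math.Min(imageWidth, left + width + dx);
    int newBottom = Math.Min(imageHeight, top + height + dy);
    return (newLeft, newTop, newRight - newLeft, newBottom - newTop);
}

// Example: pad a 50x50 face rectangle by 25 percent in a 640x480 image.
var expanded = ExpandRectangle(100, 100, 50, 50, 0.25, 640, 480);
Console.WriteLine(expanded);
```

A margin of 20 to 30 percent on each side is a reasonable starting point for head-and-shoulders crops, but the right value depends on your images.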

## Get face landmarks

54- [Face landmarks](../concepts/face-detection.md#face-landmarks) are a set of easy-to-find points on a face such as the pupils or the tip of nose. You can get face landmark data by setting the _returnFaceLandmarks_ parameter to **true**.
54+ [Face landmarks](../concepts/face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _returnFaceLandmarks_ parameter to **true**.

```csharp
IList<DetectedFace> faces = await faceClient.Face.DetectWithUrlAsync(imageUrl, true, true, null);
```

60- The following code demonstrates how you might go on to retrieve the locations of the nose and pupils:
60+ The following code demonstrates how you might retrieve the locations of the nose and pupils:

```csharp
foreach (var face in faces)
```

@@ -75,7 +75,7 @@ foreach (var face in faces)

```csharp
}
```

78- Face landmarks data can also be used to accurately calculate the direction of the face. For example, we can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The code below calculates this vector:
78+ You also can use face landmarks data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:

```csharp
var upperLipBottom = landmarks.UpperLipBottom;
```

@@ -97,13 +97,13 @@ Vector faceDirection = new Vector(

```csharp
    centerOfTwoEyes.Y - centerOfMouth.Y);
```

100- Knowing the direction of the face, you can then rotate the rectangular face frame to align it more properly. If you want to crop faces in an image, you can programmatically rotate the image so that the faces always appear upright.
100+ When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so that the faces always appear upright.
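As a possible follow-up, the direction vector can be reduced to a single roll angle for counter-rotating the image. The helper below is a hypothetical sketch that assumes the y-down coordinate system of image pixels; it isn't part of the Face client library:

```csharp
using System;

// Hypothetical helper: convert a face-direction vector (mouth center to
// eye center, in y-down image coordinates) into a roll angle in degrees.
// An upright face has a direction of roughly (0, -1) and an angle of 0.
static double RollAngleDegrees(double directionX, double directionY)
{
    // Deviation from vertical; the sign tells you which way the face leans.
    return Math.Atan2(directionX, -directionY) * 180.0 / Math.PI;
}

Console.WriteLine(RollAngleDegrees(10, -10)); // a face tilted 45 degrees
```

You can then pass the negated angle to your image library's rotation routine so that cropped faces come out upright.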

## Get face attributes

104- Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. See the [Face attributes](../concepts/face-detection.md#attributes) conceptual section for a full list.
104+ Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concepts/face-detection.md#attributes) conceptual section.

106- To analyze face attributes, set the _returnFaceAttributes_ parameter to a list of **[FaceAttributeType Enum](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype?view=azure-dotnet)** values.
106+ To analyze face attributes, set the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](https://docs.microsoft.com/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype?view=azure-dotnet) values.

```csharp
var requiredFaceAttributes = new FaceAttributeType[] {
```

@@ -118,7 +118,7 @@ var requiredFaceAttributes = new FaceAttributeType[] {

```csharp
var faces = await faceClient.DetectWithUrlAsync(imageUrl, true, false, requiredFaceAttributes);
```

121- Then, get references to the returned data and do further operations according to your needs.
121+ Then, get references to the returned data and do more operations according to your needs.

```csharp
foreach (var face in faces)
```

@@ -134,11 +134,11 @@ foreach (var face in faces)

```csharp
}
```
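As one example of such follow-up processing, a common task is to pick the dominant emotion for a face. The helper below is a hypothetical sketch that operates on a plain dictionary of label-to-confidence scores rather than on the SDK's returned types:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical helper: given emotion labels and confidence scores for one
// face, return the label with the highest confidence.
static string DominantEmotion(IReadOnlyDictionary<string, double> scores)
{
    return scores.OrderByDescending(pair => pair.Value).First().Key;
}

// Example scores; the labels and values here are illustrative only.
var scores = new Dictionary<string, double>
{
    { "Happiness", 0.92 }, { "Neutral", 0.06 }, { "Anger", 0.02 }
};
Console.WriteLine(DominantEmotion(scores));
```

The same ordering approach works for any attribute that comes back as a set of confidence values rather than a single label.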

137- To learn more about each of the attributes, refer to the [Face detection and attributes](../concepts/face-detection.md) conceptual guide.
137+ To learn more about each of the attributes, see the [Face detection and attributes](../concepts/face-detection.md) conceptual guide.

## Next steps

141- In this guide you learned how to use the various functionalities of face detection. Next, integrate these features into your app by following an in-depth tutorial.
141+ In this guide, you learned how to use the various functionalities of face detection. Next, integrate these features into your app by following an in-depth tutorial.

- [Tutorial: Create a WPF app to display face data in an image](../Tutorials/FaceAPIinCSharpTutorial.md)
- [Tutorial: Create an Android app to detect and frame faces in an image](../Tutorials/FaceAPIinJavaForAndroidTutorial.md)
