Commit 728eb9d

new detect sample for cs

1 parent c7a1a2b commit 728eb9d

File tree

1 file changed (+14 −14 lines)

articles/ai-services/computer-vision/how-to/identity-detect-faces.md

Lines changed: 14 additions & 14 deletions
@@ -28,17 +28,17 @@ The code snippets in this guide are written in C# by using the Azure AI Face cli

## Setup

-This guide assumes that you already constructed a [FaceClient](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceclient) object, named `faceClient`, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.
+This guide assumes that you already constructed a [FaceClient](/dotnet/api/azure.ai.vision.face.faceclient) object, named `faceClient`, using a Face key and endpoint URL. For instructions on how to set up this feature, follow one of the quickstarts.

## Submit data to the service

-To find faces and get their locations in an image, call the [DetectWithUrlAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithurlasync) or [DetectWithStreamAsync](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.faceoperationsextensions.detectwithstreamasync) method. **DetectWithUrlAsync** takes a URL string as input, and **DetectWithStreamAsync** takes the raw byte stream of an image as input.
+To find faces and get their locations in an image, call the [DetectAsync](/dotnet/api/azure.ai.vision.face.faceclient.detectasync) method. It takes either a URL string or the raw image binary as input.

-:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="basic1":::
+:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/Detect.cs" id="basic1":::
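The `basic1` snippet lives in an external repository and isn't visible in this diff. As a rough sketch of what the new call could look like with the `Azure.AI.Vision.Face` package (the endpoint, key, image URL, and exact parameter order here are placeholders and assumptions, not the committed sample):

```csharp
using System;
using Azure;
using Azure.AI.Vision.Face;

// Placeholder endpoint and key; substitute your own Face resource values.
var faceClient = new FaceClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-face-key>"));

// Detect faces in an image referenced by URL. The same method also accepts
// raw image bytes in place of the Uri.
var faces = (await faceClient.DetectAsync(
    new Uri("https://example.com/photo.jpg"),
    FaceDetectionModel.Detection03,
    FaceRecognitionModel.Recognition04,
    returnFaceId: false)).Value;

Console.WriteLine($"Detected {faces.Count} face(s).");
```

Running this requires a live Face resource, so treat it as a shape to compare against the real sample rather than a drop-in program.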

-The service returns a [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) object, which you can query for different kinds of information, specified below.
+The service returns a [FaceDetectionResult](/dotnet/api/azure.ai.vision.face.facedetectionresult) object, which you can query for the different kinds of information described below.

-For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction.
+For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/azure.ai.vision.face.facedetectionresult.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of the head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, expand the rectangle in each direction.
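Expanding the rectangle is plain arithmetic and doesn't need the SDK. A minimal self-contained sketch, assuming a rectangle given as left/top/width/height in pixels (the tuple type is local to this example, not an SDK type):

```csharp
using System;

// Expand a face rectangle by a margin factor on each side, clamping to the
// image bounds, so a crop can include the whole head rather than only the
// inner facial features. Rectangles are (left, top, width, height) in pixels.
static (int Left, int Top, int Width, int Height) ExpandRect(
    (int Left, int Top, int Width, int Height) r,
    double margin, int imageWidth, int imageHeight)
{
    int dx = (int)(r.Width * margin);
    int dy = (int)(r.Height * margin);
    int left = Math.Max(0, r.Left - dx);
    int top = Math.Max(0, r.Top - dy);
    int right = Math.Min(imageWidth, r.Left + r.Width + dx);
    int bottom = Math.Min(imageHeight, r.Top + r.Height + dy);
    return (left, top, right - left, bottom - top);
}

// Example: grow a 50x50 detection in a 640x480 image by 50% per side.
var crop = ExpandRect((100, 100, 50, 50), 0.5, 640, 480);
Console.WriteLine(crop); // (75, 75, 100, 100)
```

The clamp against the image bounds matters near edges, where the expanded rectangle would otherwise index outside the bitmap.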

## Determine how to process the data

@@ -48,23 +48,23 @@ This guide focuses on the specifics of the Detect call, such as what arguments y

If you set the parameter _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks.

-:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="basic2":::
+:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/Detect.cs" id="basic2":::

The optional _faceIdTimeToLive_ parameter specifies how long (in seconds) the face ID should be stored on the server. After this time expires, the face ID is removed. The default value is 86400 (24 hours).
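Putting the two parameters together, a hedged sketch (parameter names follow the prose above; the exact `DetectAsync` overload may differ, and `faceClient` is the client constructed in Setup):

```csharp
// Request persistent face IDs (approved customers only) and keep them for one
// hour (3600 seconds) instead of the default 86400.
var faces = (await faceClient.DetectAsync(
    new Uri("https://example.com/photo.jpg"),
    FaceDetectionModel.Detection03,
    FaceRecognitionModel.Recognition04,
    returnFaceId: true,
    faceIdTimeToLive: 3600)).Value;

foreach (var face in faces)
{
    Console.WriteLine($"Face ID: {face.FaceId}");
}
```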

### Get face landmarks

-[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`.
+[Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `FaceDetectionModel.Detection03` and the _returnFaceLandmarks_ parameter to `true`.

-:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="landmarks1":::
+:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/Detect.cs" id="landmarks1":::
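The `landmarks1` snippet isn't shown in the diff; a sketch of the request, under the same assumptions as the earlier examples:

```csharp
// Ask the service to compute landmark points alongside the face rectangles by
// passing returnFaceLandmarks: true with the Detection03 model.
var faces = (await faceClient.DetectAsync(
    new Uri("https://example.com/photo.jpg"),
    FaceDetectionModel.Detection03,
    FaceRecognitionModel.Recognition04,
    returnFaceId: false,
    returnFaceLandmarks: true)).Value;
```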

### Get face attributes

Besides face rectangles and landmarks, the face detection API can analyze several conceptual attributes of a face. For a full list, see the [Face attributes](../concept-face-detection.md#attributes) conceptual section.

-To analyze face attributes, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType Enum](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.faceattributetype) values.
+To analyze face attributes, set the _detectionModel_ parameter to `FaceDetectionModel.Detection03` and the _returnFaceAttributes_ parameter to a list of [FaceAttributeType](/dotnet/api/azure.ai.vision.face.faceattributetype) values.

-:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="attributes1":::
+:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/Detect.cs" id="attributes1":::
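A hedged sketch of an attributes request (the specific `FaceAttributeType` values are illustrative; not every attribute is supported by every detection model, so check the conceptual guide):

```csharp
// Request a subset of attributes; the service only computes what you ask for.
var faces = (await faceClient.DetectAsync(
    new Uri("https://example.com/photo.jpg"),
    FaceDetectionModel.Detection03,
    FaceRecognitionModel.Recognition04,
    returnFaceId: false,
    returnFaceAttributes: new[] { FaceAttributeType.HeadPose, FaceAttributeType.Mask })).Value;
```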

## Get results from the service

@@ -73,11 +73,11 @@ To analyze face attributes, set the _detectionModel_ parameter to `DetectionMode

The following code demonstrates how you might retrieve the locations of the nose and pupils:

-:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="landmarks2":::
+:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/Detect.cs" id="landmarks2":::
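The `landmarks2` snippet isn't visible here; reading the points back might look roughly like this (property names such as `FaceLandmarks`, `NoseTip`, and `PupilLeft` are assumptions based on the REST response schema):

```csharp
foreach (var face in faces)
{
    var landmarks = face.FaceLandmarks;
    if (landmarks == null) continue; // only present when returnFaceLandmarks was true

    Console.WriteLine($"Nose tip:    ({landmarks.NoseTip.X}, {landmarks.NoseTip.Y})");
    Console.WriteLine($"Left pupil:  ({landmarks.PupilLeft.X}, {landmarks.PupilLeft.Y})");
    Console.WriteLine($"Right pupil: ({landmarks.PupilRight.X}, {landmarks.PupilRight.Y})");
}
```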

You can also use face landmark data to accurately calculate the direction of the face. For example, you can define the rotation of the face as a vector from the center of the mouth to the center of the eyes. The following code calculates this vector:

-:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="direction":::
+:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/Detect.cs" id="direction":::
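The arithmetic behind that vector is SDK-independent. A self-contained sketch (points are (X, Y) in image coordinates, so Y grows downward and an upright face yields a negative Y component):

```csharp
using System;

// Direction vector from the center of the mouth to the midpoint between the
// pupils. Because image Y grows downward, the vector points "up" the face.
static (double X, double Y) FaceDirection(
    (double X, double Y) pupilLeft,
    (double X, double Y) pupilRight,
    (double X, double Y) mouthLeft,
    (double X, double Y) mouthRight)
{
    double eyesCenterX = (pupilLeft.X + pupilRight.X) / 2;
    double eyesCenterY = (pupilLeft.Y + pupilRight.Y) / 2;
    double mouthCenterX = (mouthLeft.X + mouthRight.X) / 2;
    double mouthCenterY = (mouthLeft.Y + mouthRight.Y) / 2;
    return (eyesCenterX - mouthCenterX, eyesCenterY - mouthCenterY);
}

// Example: an upright face gives a zero X component and a negative Y component.
var dir = FaceDirection((110, 120), (150, 120), (115, 180), (145, 180));
Console.WriteLine(dir); // (0, -60)
```

`Math.Atan2(-dir.Y, dir.X)` would then give the in-plane rotation angle used to level the crop.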

When you know the direction of the face, you can rotate the rectangular face frame to align it more properly. To crop faces in an image, you can programmatically rotate the image so the faces always appear upright.

@@ -86,7 +86,7 @@ When you know the direction of the face, you can rotate the rectangular face fra

The following code shows how you might retrieve the face attribute data that you requested in the original call.

-:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="attributes2":::
+:::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/Detect.cs" id="attributes2":::
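The `attributes2` snippet likewise isn't shown; reading back a requested attribute might look like this (the `FaceAttributes.HeadPose` shape with `Pitch`/`Roll`/`Yaw` is an assumption drawn from the REST response schema):

```csharp
foreach (var face in faces)
{
    var headPose = face.FaceAttributes?.HeadPose;
    if (headPose != null) // only present when HeadPose was requested
    {
        Console.WriteLine(
            $"Head pose - pitch: {headPose.Pitch}, roll: {headPose.Roll}, yaw: {headPose.Yaw}");
    }
}
```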

To learn more about each of the attributes, see the [Face detection and attributes](../concept-face-detection.md) conceptual guide.

@@ -99,4 +99,4 @@ In this guide, you learned how to use the various functionalities of face detect

## Related articles

- [Reference documentation (REST)](/rest/api/face/operation-groups)
-- [Reference documentation (.NET SDK)](/dotnet/api/overview/azure/cognitiveservices/face-readme)
+- [Reference documentation (.NET SDK)](https://aka.ms/azsdk-csharp-face-ref)
