Commit 2d84e15

acrolinx
1 parent fc3b965 commit 2d84e15

File tree: 1 file changed (+7, −7 lines)

articles/ai-services/computer-vision/how-to/specify-detection-model.md

Lines changed: 7 additions & 7 deletions
@@ -1,7 +1,7 @@
 ---
 title: How to specify a detection model - Face
 titleSuffix: Azure AI services
-description: This article will show you how to choose which face detection model to use with your Azure AI Face application.
+description: This article shows you how to choose which face detection model to use with your Azure AI Face application.
 #services: cognitive-services
 author: PatrickFarley
 manager: nitinme
@@ -19,7 +19,7 @@ ms.custom: devx-track-csharp
 
 This guide shows you how to specify a face detection model for the Azure AI Face service.
 
-The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers have the option to specify which version of the face detection model they'd like to use; they can choose the model that best fits their use case.
+The Face service uses machine learning models to perform operations on human faces in images. We continue to improve the accuracy of our models based on customer feedback and advances in research, and we deliver these improvements as model updates. Developers can specify which version of the face detection model they'd like to use; they can choose the model that best fits their use case.
 
 Read on to learn how to specify the face detection model in certain face operations. The Face service uses face detection whenever it converts an image of a face into some other form of data.
 
@@ -41,7 +41,7 @@ The different face detection models are optimized for different tasks. See the f
 | Model | Description | Performance notes | Attributes | Landmarks |
 |---------|------------|-------------------|-------------|--|
 |**detection_01** | Default choice for all face detection operations. | Not optimized for small, side-view, or blurry faces. | Returns main face attributes (head pose, glasses, and so on) if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
-|**detection_02** | Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Does not return face attributes. | Does not return face landmarks. |
+|**detection_02** | Released in May 2019 and available optionally in all face detection operations. | Improved accuracy on small, side-view, and blurry faces. | Doesn't return face attributes. | Doesn't return face landmarks. |
 |**detection_03** | Released in February 2021 and available optionally in all face detection operations. | Further improved accuracy, including on smaller faces (64x64 pixels) and rotated face orientations. | Returns mask, blur, and head pose attributes if they're specified in the detect call. | Returns face landmarks if they're specified in the detect call. |
 
 
@@ -57,11 +57,11 @@ When you use the [Detect] API, you can assign the model version with the `detect
 * `detection_02`
 * `detection_03`
 
-A request URL for the [Detect] REST API will look like this:
+A request URL for the [Detect] REST API looks like this:
 
 `https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>`
 
-If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API will use the default model version (`detection_01`). See the following code example for the .NET client library.
+If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API uses the default model version (`detection_01`). See the following code example for the .NET client library.
 
 ```csharp
 string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
@@ -85,7 +85,7 @@ string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-ser
 await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03");
 ```
 
-This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
 
 > [!NOTE]
 > You don't need to use the same detection model for all faces in a **Person** object, and you don't need to use the same detection model when detecting new faces to compare with a **Person** object (in the [Identify From Person Group] API, for example).
@@ -101,7 +101,7 @@ string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-ser
 await client.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03");
 ```
 
-This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API will use the default model, `detection_01`.
+This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
 
 > [!NOTE]
 > You don't need to use the same detection model for all faces in a **FaceList** object, and you don't need to use the same detection model when detecting new faces to compare with a **FaceList** object.
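Note (not part of the commit): the request-URL pattern edited above can be composed outside the docs article. The following shell sketch builds a [Detect] request URL selecting `detection_03`; the `westus` region and query parameters come from the diff, while the POST body and image URL are illustrative placeholders, not a live call.

```shell
# Sketch only: compose the Face Detect request URL with detectionModel=detection_03.
# Region ("westus") and parameters follow the diff; no request is actually sent.
ENDPOINT="https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
URL="${ENDPOINT}?detectionModel=detection_03&returnFaceLandmarks=true"
printf '%s\n' "$URL"
# A real request would POST it with a key and a JSON body, e.g.:
#   curl -X POST "$URL" \
#     -H "Ocp-Apim-Subscription-Key: <Subscription key>" \
#     -H "Content-Type: application/json" \
#     -d '{"url": "<image URL>"}'
```

Running the snippet only prints the composed URL; an actual call additionally needs a valid Face resource and subscription key.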
