This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C#.
When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
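For reference, here's a minimal sketch of that constructor call, assuming the `Azure.AI.Vision.Face` package (the endpoint and key values are placeholders; the later snippets on this page call the REST endpoints directly with `HttpClient` instead):

```csharp
using System;
using Azure;
using Azure.AI.Vision.Face;

// Sketch: the Face resource endpoint and key are passed to the FaceClient constructor.
string endpoint = "https://<your-resource-name>.cognitiveservices.azure.com";
string key = "<your-key>";
var faceClient = new FaceClient(new Uri(endpoint), new AzureKeyCredential(key));
```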
## Create the PersonGroup
This code creates a **PersonGroup** named `"MyPersonGroup"` to save the persons.
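As a minimal sketch of that request (assuming, as in the later snippets, an `httpClient` whose `Ocp-Apim-Subscription-Key` header is already set and an `endpoint` string holding your Face resource endpoint; the group ID shown is a placeholder):

```csharp
// Sketch: create a PersonGroup named "MyPersonGroup" through the REST endpoint.
// httpClient, endpoint, and the group ID are assumptions for illustration.
string personGroupId = "my-person-group-id";
var body = JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "MyPersonGroup" });
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(body)))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    await httpClient.PutAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}", content);
}
```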
articles/ai-services/computer-vision/how-to/mitigate-latency.md (+10 −6)
We recommend that you select a region that is closest to your users to minimize latency.
The Face service provides two ways to upload images for processing: uploading the raw byte data of the image directly in the request, or providing a URL to a remote image. Regardless of the method, the Face service needs to download the image from its source location. If the connection from the Face service to the client or the remote server is slow or poor, it affects the response time of requests. If you have an issue with latency, consider storing the image in Azure Blob Storage and passing the image URL in the request. For more implementation details, see [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). An example API call:
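Below is a minimal sketch of such a call against the REST detect endpoint (assuming an `httpClient` whose `Ocp-Apim-Subscription-Key` header is already set and an `endpoint` string holding your Face resource endpoint; the blob URL is a placeholder):

```csharp
// Sketch: detect faces by passing a Blob Storage URL instead of raw image bytes.
var body = JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = "https://<your-storage-account>.blob.core.windows.net/images/photo.jpg" });
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(body)))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    var response = await httpClient.PostAsync($"{endpoint}/face/v1.0/detect?detectionModel=detection_03", content);
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}
```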
Be sure to use a storage account in the same region as the Face resource. This reduces the latency of the connection between the Face service and the storage account.
To achieve the optimal balance between accuracy and speed, follow these tips to optimize your input images.
#### Other file size tips
Note the following additional tips:
- For face detection, when you use detection model `FaceDetectionModel.Detection01`, reducing the image file size increases processing speed. When you use detection model `FaceDetectionModel.Detection02`, reducing the image file size increases processing speed only if the image file is smaller than 1920x1080 pixels.
- For face recognition, reducing the face size increases processing speed only if the image is smaller than 200x200 pixels.
- The performance of the face detection methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
If you need to call multiple APIs, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison, you can call them in an asynchronous task:
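Here's a minimal sketch of that pattern (assuming the same `httpClient` and `endpoint` setup as the earlier snippets; the image URLs are placeholders). Both requests are started first and then awaited together, so they run in parallel rather than back to back:

```csharp
// Sketch: start both detection requests, then await them together so they run in parallel.
Task<HttpResponseMessage> DetectAsync(string imageUrl) =>
    httpClient.PostAsync(
        $"{endpoint}/face/v1.0/detect?detectionModel=detection_03",
        new StringContent(JsonConvert.SerializeObject(new { url = imageUrl }), Encoding.UTF8, "application/json"));

var detect1 = DetectAsync("https://<your-storage-account>.blob.core.windows.net/images/image1.jpg");
var detect2 = DetectAsync("https://<your-storage-account>.blob.core.windows.net/images/image2.jpg");
await Task.WhenAll(detect1, detect2);
```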
You can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API uses the default model version (`detection_01`). See the following .NET code example.
```csharp
// Create a PersonGroup and add a person with a face detected by the "detection_03" model.
// httpClient and endpoint are assumed to be configured earlier with your resource key and endpoint.
string personGroupId = "mypersongroupid";
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Group Name", ["recognitionModel"] = "recognition_04" }))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    await httpClient.PutAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}", content);
}

string personId;
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Name" }))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    using (var response = await httpClient.PostAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}/persons", content))
    {
        personId = (string)JObject.Parse(await response.Content.ReadAsStringAsync())["personId"];
    }
}

// Add a face to the person, detected with the "detection_03" model. The image URL is a placeholder.
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = "<image-url>" }))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    await httpClient.PostAsync($"{endpoint}/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03", content);
}
```
This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
You can also specify a detection model when you add a face to an existing **FaceList** object. See the following .NET code example.
```csharp
// faceListId, httpClient, and endpoint are assumed to be defined earlier.
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My face collection", ["recognitionModel"] = "recognition_04" }))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    await httpClient.PutAsync($"{endpoint}/face/v1.0/facelists/{faceListId}", content);
}

// Add a face to the FaceList, detected with the "detection_03" model. The image URL is a placeholder.
using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = "<image-url>" }))))
{
    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
    await httpClient.PostAsync($"{endpoint}/face/v1.0/facelists/{faceListId}/persistedfaces?detectionModel=detection_03", content);
}
```
This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
In this article, you learned how to specify the detection model to use with different Face APIs.