Commit bb104b2

modify wordings and snippets to fit new sdk

1 parent b7a10d6
File tree

8 files changed: +151 -76 lines changed

articles/ai-services/computer-vision/how-to/add-faces.md

Lines changed: 21 additions & 10 deletions
````diff
@@ -18,7 +18,7 @@ ms.custom: devx-track-csharp
 
 [!INCLUDE [Gate notice](../includes/identity-gate-notice.md)]
 
-This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C# and uses the Azure AI Face .NET client library.
+This guide demonstrates how to add a large number of persons and faces to a **PersonGroup** object. The same strategy also applies to **LargePersonGroup**, **FaceList**, and **LargeFaceList** objects. This sample is written in C#.
 
 ## Initialization
 
@@ -57,10 +57,6 @@ static async Task WaitCallLimitPerSecondAsync()
 }
 ```
 
-## Authorize the API call
-
-When you use the Face client library, the key and subscription endpoint are passed in through the constructor of the FaceClient class. See the [quickstart](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp&tabs=visual-studio) for instructions on creating a Face client object.
-
 
 ## Create the PersonGroup
 
@@ -70,21 +66,33 @@ This code creates a **PersonGroup** named `"MyPersonGroup"` to save the persons.
 const string personGroupId = "mypersongroupid";
 const string personGroupName = "MyPersonGroup";
 _timeStampQueue.Enqueue(DateTime.UtcNow);
-await faceClient.LargePersonGroup.CreateAsync(personGroupId, personGroupName);
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = personGroupName, ["recognitionModel"] = "recognition_04" }))))
+{
+    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+    await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content);
+}
 ```
 
 ## Create the persons for the PersonGroup
 
 This code creates **Persons** concurrently, and uses `await WaitCallLimitPerSecondAsync()` to avoid exceeding the call rate limit.
 
 ```csharp
-Person[] persons = new Person[PersonCount];
+string?[] persons = new string?[PersonCount];
 Parallel.For(0, PersonCount, async i =>
 {
     await WaitCallLimitPerSecondAsync();
 
     string personName = $"PersonName#{i}";
-    persons[i] = await faceClient.PersonGroupPerson.CreateAsync(personGroupId, personName);
+    using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = personName }))))
+    {
+        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+        using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons", content))
+        {
+            string contentString = await response.Content.ReadAsStringAsync();
+            persons[i] = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["personId"]);
+        }
+    }
 });
 ```
 
@@ -95,7 +103,6 @@ Faces added to different persons are processed concurrently. Faces added for one
 ```csharp
 Parallel.For(0, PersonCount, async i =>
 {
-    Guid personId = persons[i].PersonId;
     string personImageDir = @"/path/to/person/i/images";
 
     foreach (string imagePath in Directory.GetFiles(personImageDir, "*.jpg"))
@@ -104,7 +111,11 @@ Parallel.For(0, PersonCount, async i =>
 
         using (Stream stream = File.OpenRead(imagePath))
         {
-            await faceClient.PersonGroupPerson.AddFaceFromStreamAsync(personGroupId, personId, stream);
+            using (var content = new StreamContent(stream))
+            {
+                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
+                await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons/{persons[i]}/persistedfaces?detectionModel=detection_03", content);
+            }
         }
     }
 });
````
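The snippets above throttle every request through a `WaitCallLimitPerSecondAsync` helper (only its closing brace survives in the hunk), which queues timestamps in `_timeStampQueue` and delays callers once the per-second quota is full. A minimal sliding-window sketch of that idea, in Python rather than C# so it stands alone (the class and method names are illustrative, not from any SDK):

```python
import time
from collections import deque

class SlidingWindowThrottle:
    """Block callers so that at most `limit` calls proceed per `window` seconds."""

    def __init__(self, limit: int, window: float = 1.0):
        self.limit = limit
        self.window = window
        self._stamps: deque[float] = deque()  # timestamps of recent calls

    def wait(self) -> None:
        now = time.monotonic()
        # Discard timestamps that have aged out of the window.
        while self._stamps and now - self._stamps[0] >= self.window:
            self._stamps.popleft()
        if len(self._stamps) >= self.limit:
            # Sleep until the oldest call in the window expires.
            time.sleep(self.window - (now - self._stamps[0]))
            self._stamps.popleft()
        self._stamps.append(time.monotonic())
```

Calling `wait()` before each request keeps a burst of `Parallel.For`-style workers inside the quota without any central scheduler.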
articles/ai-services/computer-vision/how-to/mitigate-latency.md

Lines changed: 10 additions & 6 deletions
````diff
@@ -44,7 +44,9 @@ We recommend that you select a region that is closest to your users to minimize
 The Face service provides two ways to upload images for processing: uploading the raw byte data of the image directly in the request, or providing a URL to a remote image. Regardless of the method, the Face service needs to download the image from its source location. If the connection from the Face service to the client or the remote server is slow or poor, it affects the response time of requests. If you have an issue with latency, consider storing the image in Azure Blob Storage and passing the image URL in the request. For more implementation details, see [storing the image in Azure Premium Blob Storage](../../../storage/blobs/storage-upload-process-images.md?tabs=dotnet). An example API call:
 
 ``` csharp
-var faces = await client.Face.DetectWithUrlAsync("https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name>");
+var url = "https://<storage_account_name>.blob.core.windows.net/<container_name>/<file_name>";
+var response = await faceClient.DetectAsync(new Uri(url), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false);
+var faces = response.Value;
 ```
 
 Be sure to use a storage account in the same region as the Face resource. This reduces the latency of the connection between the Face service and the storage account.
@@ -67,7 +69,7 @@ To achieve the optimal balance between accuracy and speed, follow these tips to
 #### Other file size tips
 
 Note the following additional tips:
-- For face detection, when using detection model `DetectionModel.Detection01`, reducing the image file size increases processing speed. When you use detection model `DetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080 pixels.
+- For face detection, when using detection model `FaceDetectionModel.Detection01`, reducing the image file size increases processing speed. When you use detection model `FaceDetectionModel.Detection02`, reducing the image file size will only increase processing speed if the image file is smaller than 1920x1080 pixels.
 - For face recognition, reducing the face size will only increase the speed if the image is smaller than 200x200 pixels.
 - The performance of the face detection methods also depends on how many faces are in an image. The Face service can return up to 100 faces for an image. Faces are ranked by face rectangle size from large to small.
 
@@ -77,11 +79,13 @@ Note the following additional tips:
 If you need to call multiple APIs, consider calling them in parallel if your application design allows for it. For example, if you need to detect faces in two images to perform a face comparison, you can call them in an asynchronous task:
 
 ```csharp
-var faces_1 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1MzAyNzYzOTgxNTE0NTEz/john-f-kennedy---mini-biography.jpg");
-var faces_2 = client.Face.DetectWithUrlAsync("https://www.biography.com/.image/t_share/MTQ1NDY3OTIxMzExNzM3NjE3/john-f-kennedy---debating-richard-nixon.jpg");
+string url1 = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
+string url2 = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection2.jpg";
+var response1 = client.DetectAsync(new Uri(url1), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false);
+var response2 = client.DetectAsync(new Uri(url2), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false);
 
-Task.WaitAll (new Task<IList<DetectedFace>>[] { faces_1, faces_2 });
-IEnumerable<DetectedFace> results = faces_1.Result.Concat (faces_2.Result);
+Task.WaitAll(new Task<Response<IReadOnlyList<FaceDetectionResult>>>[] { response1, response2 });
+IEnumerable<FaceDetectionResult> results = response1.Result.Value.Concat(response2.Result.Value);
 ```
 
 ## Smooth over spiky traffic
````
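The parallel-call hunk above is the standard fan-out/join shape: start both detections, wait for all, then flatten the results (`Task.WaitAll` plus `Concat` in the .NET snippet). The same shape in Python, with a stub standing in for the Detect call (the stub and its return value are illustrative, not the service's actual response schema):

```python
from concurrent.futures import ThreadPoolExecutor

def detect_faces(url: str) -> list[dict]:
    # Stub for illustration: a real client would submit the URL to the
    # /face/v1.0/detect endpoint and parse the returned JSON array.
    return [{"sourceUrl": url}]

def detect_in_parallel(urls: list[str]) -> list[dict]:
    # Fan out one detection task per image, then join and flatten,
    # mirroring Task.WaitAll + Concat in the C# snippet.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(detect_faces, u) for u in urls]
        return [face for f in futures for face in f.result()]
```

Because each detection is network-bound, running them concurrently cuts total latency to roughly that of the slowest single call.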

articles/ai-services/computer-vision/how-to/specify-detection-model.md

Lines changed: 35 additions & 8 deletions
````diff
@@ -59,13 +59,14 @@ When you use the [Detect] API, you can assign the model version with the `detect
 
 A request URL for the [Detect] REST API looks like this:
 
-`https://westus.api.cognitive.microsoft.com/face/v1.0/detect[?returnFaceId][&returnFaceLandmarks][&returnFaceAttributes][&recognitionModel][&returnRecognitionModel][&detectionModel]&subscription-key=<Subscription key>`
+`https://westus.api.cognitive.microsoft.com/face/v1.0/detect?detectionModel={detectionModel}&recognitionModel={recognitionModel}&returnFaceId={returnFaceId}&returnFaceAttributes={returnFaceAttributes}&returnFaceLandmarks={returnFaceLandmarks}&returnRecognitionModel={returnRecognitionModel}&faceIdTimeToLive={faceIdTimeToLive}`
 
 If you are using the client library, you can assign the value for `detectionModel` by passing in an appropriate string. If you leave it unassigned, the API uses the default model version (`detection_01`). See the following code example for the .NET client library.
 
 ```csharp
 string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
-var faces = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: false, returnFaceLandmarks: false, recognitionModel: "recognition_04", detectionModel: "detection_03");
+var response = await faceClient.DetectAsync(new Uri(imageUrl), FaceDetectionModel.Detection03, FaceRecognitionModel.Recognition04, returnFaceId: false, returnFaceLandmarks: false);
+var faces = response.Value;
 ```
 
 ## Add face to Person with specified model
@@ -77,12 +78,29 @@ See the following code example for the .NET client library.
 ```csharp
 // Create a PersonGroup and add a person with face detected by "detection_03" model
 string personGroupId = "mypersongroupid";
-await faceClient.PersonGroup.CreateAsync(personGroupId, "My Person Group Name", recognitionModel: "recognition_04");
-
-string personId = (await faceClient.PersonGroupPerson.CreateAsync(personGroupId, "My Person Name")).PersonId;
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Group Name", ["recognitionModel"] = "recognition_04" }))))
+{
+    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+    await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}", content);
+}
+
+string? personId = null;
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My Person Name" }))))
+{
+    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+    using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons", content))
+    {
+        string contentString = await response.Content.ReadAsStringAsync();
+        personId = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["personId"]);
+    }
+}
 
 string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
-await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, personId, imageUrl, detectionModel: "detection_03");
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = imageUrl }))))
+{
+    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+    await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/persongroups/{personGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03", content);
+}
 ```
 
 This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Person** to it. Then it adds a Face to this **Person** using the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
@@ -95,10 +113,18 @@ This code creates a **PersonGroup** with ID `mypersongroupid` and adds a **Perso
 You can also specify a detection model when you add a face to an existing **FaceList** object. See the following code example for the .NET client library.
 
 ```csharp
-await faceClient.FaceList.CreateAsync(faceListId, "My face collection", recognitionModel: "recognition_04");
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = "My face collection", ["recognitionModel"] = "recognition_04" }))))
+{
+    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+    await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}", content);
+}
 
 string imageUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/detection1.jpg";
-await client.FaceList.AddFaceFromUrlAsync(faceListId, imageUrl, detectionModel: "detection_03");
+using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = imageUrl }))))
+{
+    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
+    await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/facelists/{faceListId}/persistedfaces?detectionModel=detection_03", content);
+}
 ```
 
 This code creates a **FaceList** called `My face collection` and adds a Face to it with the `detection_03` model. If you don't specify the *detectionModel* parameter, the API uses the default model, `detection_01`.
@@ -113,6 +139,7 @@ In this article, you learned how to specify the detection model to use with diff
 
 * [Face .NET SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp)
 * [Face Python SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-python%253fpivots%253dprogramming-language-python)
+* [Face JavaScript SDK](../quickstarts-sdk/identity-client-library.md?pivots=programming-language-javascript%253fpivots%253dprogramming-language-javascript)
 
 [Detect]: /rest/api/face/face-detection-operations/detect
 [Identify From Person Group]: /rest/api/face/face-recognition-operations/identify-from-person-group
````
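The REST snippets introduced throughout this commit all share one shape: a PUT creates the collection and fixes its `recognitionModel`, a POST adds an entity, and a `detectionModel` query parameter is chosen per `persistedfaces` call. A request-builder sketch of that shape in Python, building (method, url, body) tuples without sending anything (`ENDPOINT` is a placeholder for the resource endpoint, and the helper names are illustrative):

```python
import json

ENDPOINT = "https://{resource}.cognitiveservices.azure.com"  # placeholder

def create_person_group(group_id: str, name: str) -> tuple[str, str, str]:
    # PUT creates the collection; recognitionModel is fixed at creation time.
    body = json.dumps({"name": name, "recognitionModel": "recognition_04"})
    return ("PUT", f"{ENDPOINT}/face/v1.0/persongroups/{group_id}", body)

def add_person_face(group_id: str, person_id: str, image_url: str) -> tuple[str, str, str]:
    # detectionModel is selected per call via the query string.
    url = (f"{ENDPOINT}/face/v1.0/persongroups/{group_id}"
           f"/persons/{person_id}/persistedfaces?detectionModel=detection_03")
    return ("POST", url, json.dumps({"url": image_url}))
```

Splitting request construction from transport like this makes the model choices visible at a glance and easy to unit-test.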
