---
title: "Quickstart: Face identification with TypeScript"
description: In this quickstart, get started using the Azure AI Face TypeScript SDK to detect and identify faces in images.
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-vision
ms.subservice: azure-ai-face
ms.custom:
  - ignite-2023
ms.topic: include
ms.date: 07/23/2025
ms.author: pafarley
---

[Reference documentation](https://aka.ms/azsdk-javascript-face-ref) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/face/ai-vision-face-rest) | [Package (npm)](https://www.npmjs.com/package/@azure-rest/ai-vision-face) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/face/ai-vision-face-rest/samples)

## Prerequisites

* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
* [Node.js LTS](https://nodejs.org/)
* [TypeScript](https://www.typescriptlang.org/)
* [Visual Studio Code](https://code.visualstudio.com/)
* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
  * You need the key and endpoint from the resource you create to connect your application to the Face API.
  * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.

## Set up local development environment

1. Create a new directory for your project and navigate to it:

    ```console
    mkdir face-identification
    cd face-identification
    code .
    ```

1. Create a new package for ESM modules in your project directory:

    ```console
    npm init -y
    npm pkg set type=module
    ```

1. Install the required packages. The sample code loads its key and endpoint with `dotenv`, so install it alongside the Face client library:

    ```console
    npm install @azure-rest/ai-vision-face dotenv
    ```

1. Install development dependencies:

    ```console
    npm install typescript @types/node --save-dev
    ```

1. Create a `tsconfig.json` file in your project directory:

    ```json
    {
      "compilerOptions": {
        "target": "es2022",
        "module": "esnext",
        "moduleResolution": "bundler",
        "rootDir": "./src",
        "outDir": "./dist/",
        "esModuleInterop": true,
        "forceConsistentCasingInFileNames": true,
        "strict": true,
        "skipLibCheck": true,
        "declaration": true,
        "sourceMap": true,
        "resolveJsonModule": true,
        "moduleDetection": "force",
        "allowSyntheticDefaultImports": true,
        "verbatimModuleSyntax": false
      },
      "include": [
        "src/**/*.ts"
      ],
      "exclude": [
        "node_modules/**/*",
        "**/*.spec.ts"
      ]
    }
    ```

1. Update `package.json` to include a script for building TypeScript files:

    ```json
    "scripts": {
      "build": "tsc",
      "start": "node dist/index.js"
    }
    ```

1. Create a `resources` folder and add sample images to it.

1. Create a `src` directory for your TypeScript code.

[!INCLUDE [create environment variables](../face-environment-variables.md)]

## Identify and verify faces

Create a new file in your `src` directory, `index.ts`, and paste in the following code. Replace the image paths and person group/person names as needed.

> [!NOTE]
> If you haven't received access to the Face service using the [intake form](https://aka.ms/facerecognition), some of these functions won't work.

```typescript
import { randomUUID } from "crypto";
import { AzureKeyCredential } from "@azure/core-auth";
import createFaceClient, {
  getLongRunningPoller,
  isUnexpected,
} from "@azure-rest/ai-vision-face";
import "dotenv/config";

/**
 * This sample demonstrates how to identify and verify faces using Azure Face API.
 *
 * @summary Face identification and verification.
 */

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

const main = async () => {
  const endpoint = process.env["FACE_ENDPOINT"] ?? "<endpoint>";
  const apikey = process.env["FACE_APIKEY"] ?? "<apikey>";
  const credential = new AzureKeyCredential(apikey);
  const client = createFaceClient(endpoint, credential);

  const imageBaseUrl =
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
  const largePersonGroupId = randomUUID();

  console.log("========IDENTIFY FACES========\n");

  // Create a dictionary for all your images, grouping similar ones under the same key.
  const personDictionary: Record<string, string[]> = {
    "Family1-Dad": ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
    "Family1-Mom": ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
    "Family1-Son": ["Family1-Son1.jpg", "Family1-Son2.jpg"],
  };

  // A group photo that includes some of the persons you seek to identify from your dictionary.
  const sourceImageFileName = "identification1.jpg";

  // Create a large person group.
  console.log(`Creating a person group with ID: ${largePersonGroupId}`);
  const createGroupResponse = await client
    .path("/largepersongroups/{largePersonGroupId}", largePersonGroupId)
    .put({
      body: {
        name: largePersonGroupId,
        recognitionModel: "recognition_04",
      },
    });
  if (isUnexpected(createGroupResponse)) {
    throw new Error(createGroupResponse.body.error.message);
  }

  // Add faces to the person group.
  console.log("Adding faces to person group...");
  await Promise.all(
    Object.keys(personDictionary).map(async (name) => {
      console.log(`Create a persongroup person: ${name}`);
      const createPersonResponse = await client
        .path("/largepersongroups/{largePersonGroupId}/persons", largePersonGroupId)
        .post({
          body: { name },
        });
      if (isUnexpected(createPersonResponse)) {
        throw new Error(createPersonResponse.body.error.message);
      }
      const { personId } = createPersonResponse.body;

      await Promise.all(
        personDictionary[name].map(async (similarImage) => {
          // Check if the image is of sufficient quality for recognition.
          const detectResponse = await client.path("/detect").post({
            contentType: "application/json",
            queryParameters: {
              detectionModel: "detection_03",
              recognitionModel: "recognition_04",
              returnFaceId: false,
              returnFaceAttributes: ["qualityForRecognition"],
            },
            body: { url: `${imageBaseUrl}${similarImage}` },
          });
          if (isUnexpected(detectResponse)) {
            throw new Error(detectResponse.body.error.message);
          }

          const sufficientQuality = detectResponse.body.every(
            (face: any) => face.faceAttributes?.qualityForRecognition === "high"
          );
          if (!sufficientQuality || detectResponse.body.length !== 1) {
            return;
          }

          // Quality is sufficient; add the face to the group.
          console.log(
            `Add face to the person group person: (${name}) from image: (${similarImage})`
          );
          const addFaceResponse = await client
            .path(
              "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces",
              largePersonGroupId,
              personId
            )
            .post({
              queryParameters: { detectionModel: "detection_03" },
              body: { url: `${imageBaseUrl}${similarImage}` },
            });
          if (isUnexpected(addFaceResponse)) {
            throw new Error(addFaceResponse.body.error.message);
          }
        })
      );
    })
  );
  console.log("Done adding faces to person group.");

  // Train the large person group.
  console.log(`\nTraining person group: ${largePersonGroupId}`);
  const trainResponse = await client
    .path("/largepersongroups/{largePersonGroupId}/train", largePersonGroupId)
    .post();
  if (isUnexpected(trainResponse)) {
    throw new Error(trainResponse.body.error.message);
  }
  const poller = await getLongRunningPoller(client, trainResponse);
  await poller.pollUntilDone();
  console.log(`Training status: ${poller.getOperationState().status}`);
  if (poller.getOperationState().status !== "succeeded") {
    return;
  }

  console.log("Pausing for 60 seconds to avoid triggering rate limit on free account...");
  await sleep(60000);

  // Detect faces from the source image URL, and keep only those with sufficient quality for recognition.
  const detectSourceResponse = await client.path("/detect").post({
    contentType: "application/json",
    queryParameters: {
      detectionModel: "detection_03",
      recognitionModel: "recognition_04",
      returnFaceId: true,
      returnFaceAttributes: ["qualityForRecognition"],
    },
    body: { url: `${imageBaseUrl}${sourceImageFileName}` },
  });
  if (isUnexpected(detectSourceResponse)) {
    throw new Error(detectSourceResponse.body.error.message);
  }
  const faceIds = detectSourceResponse.body
    .filter((face: any) => face.faceAttributes?.qualityForRecognition !== "low")
    .map((face: any) => face.faceId);

  // Identify the faces in the large person group.
  const identifyResponse = await client.path("/identify").post({
    body: { faceIds, largePersonGroupId },
  });
  if (isUnexpected(identifyResponse)) {
    throw new Error(identifyResponse.body.error.message);
  }

  await Promise.all(
    identifyResponse.body.map(async (result: any) => {
      try {
        const candidate = result.candidates[0];
        if (!candidate) {
          console.log(`No persons identified for face with ID ${result.faceId}`);
          return;
        }
        const getPersonResponse = await client
          .path(
            "/largepersongroups/{largePersonGroupId}/persons/{personId}",
            largePersonGroupId,
            candidate.personId
          )
          .get();
        if (isUnexpected(getPersonResponse)) {
          throw new Error(getPersonResponse.body.error.message);
        }
        const person = getPersonResponse.body;
        console.log(
          `Person: ${person.name} is identified for face in: ${sourceImageFileName} with ID: ${result.faceId}. Confidence: ${candidate.confidence}`
        );

        // Verification:
        const verifyResponse = await client.path("/verify").post({
          body: {
            faceId: result.faceId,
            largePersonGroupId,
            personId: person.personId,
          },
        });
        if (isUnexpected(verifyResponse)) {
          throw new Error(verifyResponse.body.error.message);
        }
        console.log(
          `Verification result between face ${result.faceId} and person ${person.personId}: ${verifyResponse.body.isIdentical} with confidence: ${verifyResponse.body.confidence}`
        );
      } catch (error: any) {
        console.log(
          `No persons identified for face with ID ${result.faceId}: ${error.message}`
        );
      }
    })
  );
  console.log();

  // Delete the large person group.
  console.log(`Deleting person group: ${largePersonGroupId}`);
  const deleteResponse = await client
    .path("/largepersongroups/{largePersonGroupId}", largePersonGroupId)
    .delete();
  if (isUnexpected(deleteResponse)) {
    throw new Error(deleteResponse.body.error.message);
  }
  console.log();

  console.log("Done.");
};

main().catch(console.error);
```
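
The identification step above keeps only detected faces whose `qualityForRecognition` attribute isn't `low`, because low-quality faces hurt identification accuracy. That filtering can be factored into a small, pure helper that's easy to unit test. This is an illustrative sketch, not part of the SDK: the `DetectedFace` type and `usableFaceIds` function are names introduced here.

```typescript
// Illustrative helper (not part of the SDK): mirrors the quality filter the
// sample applies before calling /identify. A detected face is usable when its
// qualityForRecognition attribute is "medium" or "high" (that is, not "low")
// and the service returned a faceId for it.
type DetectedFace = {
  faceId?: string;
  faceAttributes?: { qualityForRecognition?: "low" | "medium" | "high" };
};

function usableFaceIds(faces: DetectedFace[]): string[] {
  return faces
    .filter((face) => face.faceAttributes?.qualityForRecognition !== "low")
    .map((face) => face.faceId)
    .filter((id): id is string => id !== undefined);
}
```

With a helper like this, the detection step in the sample reduces to a single call on the response body, and the same rule can be reused wherever faces are screened before identification.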

## Build and run the sample

1. Compile the TypeScript code:

    ```console
    npm run build
    ```

1. Run the compiled JavaScript:

    ```console
    npm run start
    ```

## Output

```console
========IDENTIFY FACES========

Creating a person group with ID: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Adding faces to person group...
Create a persongroup person: Family1-Dad
Create a persongroup person: Family1-Mom
Create a persongroup person: Family1-Son
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad1.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom1.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son1.jpg)
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad2.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom2.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son2.jpg)
Done adding faces to person group.

Training person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Training status: succeeded
Pausing for 60 seconds to avoid triggering rate limit on free account...
No persons identified for face with ID 56380623-8bf0-414a-b9d9-c2373386b7be
Person: Family1-Dad is identified for face in: identification1.jpg with ID: c45052eb-a910-4fd3-b1c3-f91ccccc316a. Confidence: 0.96807
Person: Family1-Son is identified for face in: identification1.jpg with ID: 8dce9b50-513f-4fe2-9e19-352acfd622b3. Confidence: 0.9281
Person: Family1-Mom is identified for face in: identification1.jpg with ID: 75868da3-66f6-4b5f-a172-0b619f4d74c1. Confidence: 0.96902
Verification result between face c45052eb-a910-4fd3-b1c3-f91ccccc316a and person 35a58d14-fd58-4146-9669-82ed664da357: true with confidence: 0.96807
Verification result between face 8dce9b50-513f-4fe2-9e19-352acfd622b3 and person 2d4d196c-5349-431c-bf0c-f1d7aaa180ba: true with confidence: 0.9281
Verification result between face 75868da3-66f6-4b5f-a172-0b619f4d74c1 and person 35d5de9e-5f92-4552-8907-0d0aac889c3e: true with confidence: 0.96902

Deleting person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f

Done.
```

## Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or the resource group. Deleting the resource group also deletes any other resources associated with it.

* [Azure portal](../../../multi-service-resource.md?pivots=azportal#clean-up-resources)
* [Azure CLI](../../../multi-service-resource.md?pivots=azcli#clean-up-resources)
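
The linked articles walk through both options in detail. As a quick sketch of the CLI route, assuming the Azure CLI is installed and you're signed in, deleting the resource group removes the Face resource along with everything else in it. The group name `face-quickstart-rg` below is a placeholder for your own resource group name:

```console
az group delete --name face-quickstart-rg
```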

## Next steps

In this quickstart, you learned how to use the Face client library for TypeScript to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

> [!div class="nextstepaction"]
> [Specify a face detection model version](../../how-to/specify-detection-model.md)
