---
title: "Quickstart: Face identification with TypeScript"
description: In this quickstart, get started using the Azure AI Face TypeScript SDK to detect and identify faces in images.
author: PatrickFarley
manager: nitinme
ms.service: azure-ai-vision
ms.subservice: azure-ai-face
ms.custom:
  - ignite-2023
ms.topic: include
ms.date: 07/23/2025
ms.author: pafarley
---

Get started with facial recognition using the Face client library for TypeScript. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

[Reference documentation](https://aka.ms/azsdk-javascript-face-ref) | [Library source code](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/face/ai-vision-face-rest) | [Package (npm)](https://www.npmjs.com/package/@azure-rest/ai-vision-face) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/face/ai-vision-face-rest/samples)

## Prerequisites

* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
* [Node.js LTS](https://nodejs.org/)
* [TypeScript](https://www.typescriptlang.org/)
* [Visual Studio Code](https://code.visualstudio.com/)
* Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
    * You'll need the key and endpoint from the resource you create to connect your application to the Face API.
    * You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.

## Set up local development environment

1. Create a new directory for your project and navigate to it:

    ```console
    mkdir face-identification
    cd face-identification
    code .
    ```

1. Create a new package for ESM modules in your project directory:

    ```console
    npm init -y
    npm pkg set type=module
    ```

1. Install the required packages. The sample code also imports `@azure/core-auth` and `dotenv`, so install them alongside the Face client library:

    ```console
    npm install @azure-rest/ai-vision-face @azure/core-auth dotenv
    ```

1. Install development dependencies:

    ```console
    npm install typescript @types/node --save-dev
    ```

1. Create a `tsconfig.json` file in your project directory:

    ```json
    {
      "compilerOptions": {
        "target": "es2022",
        "module": "esnext",
        "moduleResolution": "bundler",
        "rootDir": "./src",
        "outDir": "./dist/",
        "esModuleInterop": true,
        "forceConsistentCasingInFileNames": true,
        "strict": true,
        "skipLibCheck": true,
        "declaration": true,
        "sourceMap": true,
        "resolveJsonModule": true,
        "moduleDetection": "force",
        "allowSyntheticDefaultImports": true,
        "verbatimModuleSyntax": false
      },
      "include": [
        "src/**/*.ts"
      ],
      "exclude": [
        "node_modules/**/*",
        "**/*.spec.ts"
      ]
    }
    ```

1. Update `package.json` to include a script for building TypeScript files:

    ```json
    "scripts": {
      "build": "tsc",
      "start": "node dist/index.js"
    }
    ```

1. Create a `src` directory for your TypeScript code. The sample loads its images from a public URL, so you don't need to add image files to the project.

[!INCLUDE [create environment variables](../face-environment-variables.md)]

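The sample imports `dotenv/config`, so as an alternative to system-wide environment variables you can keep the key and endpoint in a `.env` file in the project root. This is a minimal sketch with placeholder values; substitute your own resource's endpoint and key:

```text
FACE_ENDPOINT=<endpoint>
FACE_APIKEY=<apikey>
```

Don't commit the `.env` file to source control, because it contains a secret.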
## Identify and verify faces

Create a new file in your `src` directory, `index.ts`, and paste in the following code. Replace the image URLs and person group or person names as needed.

> [!NOTE]
> If you haven't received access to the Face service using the [intake form](https://aka.ms/facerecognition), some of these functions won't work.

```typescript
import { randomUUID } from "crypto";
import { AzureKeyCredential } from "@azure/core-auth";
import createFaceClient, {
  getLongRunningPoller,
  isUnexpected,
} from "@azure-rest/ai-vision-face";
import "dotenv/config";

/**
 * This sample demonstrates how to identify and verify faces using Azure Face API.
 *
 * @summary Face identification and verification.
 */

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

const main = async () => {
  const endpoint = process.env["FACE_ENDPOINT"] ?? "<endpoint>";
  const apikey = process.env["FACE_APIKEY"] ?? "<apikey>";
  const credential = new AzureKeyCredential(apikey);
  const client = createFaceClient(endpoint, credential);

  const imageBaseUrl =
    "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
  const largePersonGroupId = randomUUID();

  console.log("========IDENTIFY FACES========\n");

  // Create a dictionary for all your images, grouping similar ones under the same key.
  const personDictionary: Record<string, string[]> = {
    "Family1-Dad": ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
    "Family1-Mom": ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
    "Family1-Son": ["Family1-Son1.jpg", "Family1-Son2.jpg"],
  };

  // A group photo that includes some of the persons you seek to identify from your dictionary.
  const sourceImageFileName = "identification1.jpg";

  // Create a large person group.
  console.log(`Creating a person group with ID: ${largePersonGroupId}`);
  const createGroupResponse = await client
    .path("/largepersongroups/{largePersonGroupId}", largePersonGroupId)
    .put({
      body: {
        name: largePersonGroupId,
        recognitionModel: "recognition_04",
      },
    });
  if (isUnexpected(createGroupResponse)) {
    throw new Error(createGroupResponse.body.error.message);
  }

  // Add faces to the person group.
  console.log("Adding faces to person group...");
  await Promise.all(
    Object.keys(personDictionary).map(async (name) => {
      console.log(`Create a persongroup person: ${name}`);
      const createPersonResponse = await client
        .path("/largepersongroups/{largePersonGroupId}/persons", largePersonGroupId)
        .post({
          body: { name },
        });
      if (isUnexpected(createPersonResponse)) {
        throw new Error(createPersonResponse.body.error.message);
      }
      const { personId } = createPersonResponse.body;

      await Promise.all(
        personDictionary[name].map(async (similarImage) => {
          // Check if the image is of sufficient quality for recognition.
          const detectResponse = await client.path("/detect").post({
            contentType: "application/json",
            queryParameters: {
              detectionModel: "detection_03",
              recognitionModel: "recognition_04",
              returnFaceId: false,
              returnFaceAttributes: ["qualityForRecognition"],
            },
            body: { url: `${imageBaseUrl}${similarImage}` },
          });
          if (isUnexpected(detectResponse)) {
            throw new Error(detectResponse.body.error.message);
          }

          const sufficientQuality = detectResponse.body.every(
            (face: any) => face.faceAttributes?.qualityForRecognition === "high"
          );
          if (!sufficientQuality || detectResponse.body.length !== 1) {
            return;
          }

          // Quality is sufficient, add to group.
          console.log(
            `Add face to the person group person: (${name}) from image: (${similarImage})`
          );
          const addFaceResponse = await client
            .path(
              "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces",
              largePersonGroupId,
              personId
            )
            .post({
              queryParameters: { detectionModel: "detection_03" },
              body: { url: `${imageBaseUrl}${similarImage}` },
            });
          if (isUnexpected(addFaceResponse)) {
            throw new Error(addFaceResponse.body.error.message);
          }
        })
      );
    })
  );
  console.log("Done adding faces to person group.");

  // Train the large person group.
  console.log(`\nTraining person group: ${largePersonGroupId}`);
  const trainResponse = await client
    .path("/largepersongroups/{largePersonGroupId}/train", largePersonGroupId)
    .post();
  if (isUnexpected(trainResponse)) {
    throw new Error(trainResponse.body.error.message);
  }
  const poller = await getLongRunningPoller(client, trainResponse);
  await poller.pollUntilDone();
  console.log(`Training status: ${poller.getOperationState().status}`);
  if (poller.getOperationState().status !== "succeeded") {
    return;
  }

  console.log("Pausing for 60 seconds to avoid triggering rate limit on free account...");
  await sleep(60000);

  // Detect faces from the source image URL and only take those with sufficient quality for recognition.
  const detectSourceResponse = await client.path("/detect").post({
    contentType: "application/json",
    queryParameters: {
      detectionModel: "detection_03",
      recognitionModel: "recognition_04",
      returnFaceId: true,
      returnFaceAttributes: ["qualityForRecognition"],
    },
    body: { url: `${imageBaseUrl}${sourceImageFileName}` },
  });
  if (isUnexpected(detectSourceResponse)) {
    throw new Error(detectSourceResponse.body.error.message);
  }
  const faceIds = detectSourceResponse.body
    .filter((face: any) => face.faceAttributes?.qualityForRecognition !== "low")
    .map((face: any) => face.faceId);

  // Identify the faces in a large person group.
  const identifyResponse = await client.path("/identify").post({
    body: { faceIds, largePersonGroupId },
  });
  if (isUnexpected(identifyResponse)) {
    throw new Error(identifyResponse.body.error.message);
  }

  await Promise.all(
    identifyResponse.body.map(async (result: any) => {
      try {
        const candidate = result.candidates[0];
        if (!candidate) {
          console.log(`No persons identified for face with ID ${result.faceId}`);
          return;
        }
        const getPersonResponse = await client
          .path(
            "/largepersongroups/{largePersonGroupId}/persons/{personId}",
            largePersonGroupId,
            candidate.personId
          )
          .get();
        if (isUnexpected(getPersonResponse)) {
          throw new Error(getPersonResponse.body.error.message);
        }
        const person = getPersonResponse.body;
        console.log(
          `Person: ${person.name} is identified for face in: ${sourceImageFileName} with ID: ${result.faceId}. Confidence: ${candidate.confidence}`
        );

        // Verification:
        const verifyResponse = await client.path("/verify").post({
          body: {
            faceId: result.faceId,
            largePersonGroupId,
            personId: person.personId,
          },
        });
        if (isUnexpected(verifyResponse)) {
          throw new Error(verifyResponse.body.error.message);
        }
        console.log(
          `Verification result between face ${result.faceId} and person ${person.personId}: ${verifyResponse.body.isIdentical} with confidence: ${verifyResponse.body.confidence}`
        );
      } catch (error: any) {
        console.log(
          `No persons identified for face with ID ${result.faceId}: ${error.message}`
        );
      }
    })
  );
  console.log();

  // Delete the large person group.
  console.log(`Deleting person group: ${largePersonGroupId}`);
  const deleteResponse = await client
    .path("/largepersongroups/{largePersonGroupId}", largePersonGroupId)
    .delete();
  if (isUnexpected(deleteResponse)) {
    throw new Error(deleteResponse.body.error.message);
  }
  console.log();

  console.log("Done.");
};

main().catch(console.error);
```
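
Two quality rules are easy to miss inside the larger flow: enrollment only adds an image when it contains exactly one face rated `high` for `qualityForRecognition`, while identification keeps any detected face not rated `low`. As a standalone sketch (these helper names are hypothetical, not part of the SDK), the same checks can be isolated like this:

```typescript
// Shape of the fields this sketch reads from a /detect response entry.
type DetectedFace = {
  faceId?: string;
  faceAttributes?: { qualityForRecognition?: string };
};

// Enrollment check: exactly one face, and it must be rated "high".
function isEnrollable(faces: DetectedFace[]): boolean {
  return (
    faces.length === 1 &&
    faces.every((f) => f.faceAttributes?.qualityForRecognition === "high")
  );
}

// Identification filter: keep IDs of faces not rated "low".
function identifiableFaceIds(faces: DetectedFace[]): string[] {
  return faces
    .filter((f) => f.faceAttributes?.qualityForRecognition !== "low")
    .map((f) => f.faceId)
    .filter((id): id is string => id !== undefined);
}
```

Filtering before enrollment matters because low-quality reference images degrade later identification accuracy for that person.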

## Build and run the sample

1. Compile the TypeScript code:

    ```console
    npm run build
    ```

1. Run the compiled JavaScript:

    ```console
    npm run start
    ```

## Output

```console
========IDENTIFY FACES========

Creating a person group with ID: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Adding faces to person group...
Create a persongroup person: Family1-Dad
Create a persongroup person: Family1-Mom
Create a persongroup person: Family1-Son
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad1.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom1.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son1.jpg)
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad2.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom2.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son2.jpg)
Done adding faces to person group.

Training person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Training status: succeeded
Pausing for 60 seconds to avoid triggering rate limit on free account...
No persons identified for face with ID 56380623-8bf0-414a-b9d9-c2373386b7be
Person: Family1-Dad is identified for face in: identification1.jpg with ID: c45052eb-a910-4fd3-b1c3-f91ccccc316a. Confidence: 0.96807
Person: Family1-Son is identified for face in: identification1.jpg with ID: 8dce9b50-513f-4fe2-9e19-352acfd622b3. Confidence: 0.9281
Person: Family1-Mom is identified for face in: identification1.jpg with ID: 75868da3-66f6-4b5f-a172-0b619f4d74c1. Confidence: 0.96902
Verification result between face c45052eb-a910-4fd3-b1c3-f91ccccc316a and person 35a58d14-fd58-4146-9669-82ed664da357: true with confidence: 0.96807
Verification result between face 8dce9b50-513f-4fe2-9e19-352acfd622b3 and person 2d4d196c-5349-431c-bf0c-f1d7aaa180ba: true with confidence: 0.9281
Verification result between face 75868da3-66f6-4b5f-a172-0b619f4d74c1 and person 35d5de9e-5f92-4552-8907-0d0aac889c3e: true with confidence: 0.96902

Deleting person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f

Done.
```

## Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

* [Azure portal](../../../multi-service-resource.md?pivots=azportal#clean-up-resources)
* [Azure CLI](../../../multi-service-resource.md?pivots=azcli#clean-up-resources)
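
The linked articles cover both options in detail. As a quick sketch with a placeholder resource group name (substitute your own), deleting the resource group from the Azure CLI removes the Face resource along with everything else in the group:

```console
az group delete --name <your-resource-group> --yes
```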

## Next steps

In this quickstart, you learned how to use the Face client library for TypeScript to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

> [!div class="nextstepaction"]
> [Specify a face detection model version](../../how-to/specify-detection-model.md)