Commit 9bee326: Merge pull request #266351 from PatrickFarley/freshness ("Freshness")
2 parents: 8640e16 + d43feb3

22 files changed: +164 -146 lines

articles/ai-services/computer-vision/Tutorials/build-enrollment-app.md (13 additions, 10 deletions)

@@ -10,28 +10,31 @@ ms.subservice: azure-ai-face
 ms.custom:
   - ignite-2023
 ms.topic: tutorial
-ms.date: 11/17/2020
+ms.date: 02/14/2024
 ms.author: pafarley
 ---

 # Build a React Native app to add users to a Face service

-This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identification, attendance tracking, or personalization kiosk, based on their face data.
+This guide will show you how to get started with a sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquiring high-quality face data. An integrated system could use an app like this to provide touchless access control, identification, attendance tracking, or personalization kiosks based on users' face data.

-When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
+When users launch the app, it shows a detailed consent screen. If the user gives consent, the app prompts them for a username and password and then captures a high-quality face image using the device's camera.

-The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
+The sample app is written using JavaScript and the React Native framework. It can be deployed on Android and iOS devices.

 ## Prerequisites

 * An Azure subscription – [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
 * Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
 * You'll need the key and endpoint from the resource you created to connect your application to Face API.

-### Important Security Considerations
-* For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login.
-* Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
-* As a best practice, consider having separate API keys for development and production.
+
+> [!IMPORTANT]
+> **Security considerations**
+>
+> * For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely, which likely involves using an intermediate service to validate a user token generated during login.
+> * Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
+> * As a best practice, consider having separate API keys for development and production.

 ## Set up the development environment

@@ -63,7 +66,7 @@ The sample app is written using JavaScript and the React Native framework. It ca
 ## Customize the app for your business

-Now that you have set up the sample app, you can tailor it to your own needs.
+Now that you've set up the sample app, you can tailor it to your own needs.

 For example, you may want to add situation-specific information on your consent page:

@@ -76,7 +79,7 @@ For example, you may want to add situation-specific information on your consent
 * Face size (faces that are distant from the camera)
 * Face orientation (faces turned or tilted away from camera)
 * Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise
-* Occlusion (partially hidden or obstructed faces) including accessories like hats or thick-rimmed glasses)
+* Occlusion (partially hidden or obstructed faces), including accessories like hats or thick-rimmed glasses
 * Blur (such as by rapid face movement when the photograph was taken).

 The service provides image quality checks to help you make the choice of whether the image is of sufficient quality based on the above factors to add the customer or attempt face recognition. This app demonstrates how to access frames from the device's camera, detect quality and show user interface messages to the user to help them capture a higher quality image, select the highest-quality frames, and add the detected face into the Face API service.
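The capture flow described in that paragraph (check each frame's quality, coach the user, and enroll only a high-quality face) can be sketched against the Face service's `qualityForRecognition` attribute. This is a hedged sketch, not code from the sample app: it assumes the Microsoft.Azure.CognitiveServices.Vision.Face .NET client library, and the client construction and frame source are placeholders.

```csharp
// Hedged sketch (not from the sample app): ask the Face service whether a
// captured frame is good enough to enroll. Assumes the
// Microsoft.Azure.CognitiveServices.Vision.Face client library.
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

static class EnrollmentQualityCheck
{
    public static async Task<bool> FrameIsEnrollableAsync(IFaceClient client, Stream frame)
    {
        // QualityForRecognition is only returned for detection_01/detection_03
        // combined with recognition_03/recognition_04 (see the note in
        // concept-face-detection.md above).
        IList<DetectedFace> faces = await client.Face.DetectWithStreamAsync(
            frame,
            detectionModel: DetectionModel.Detection03,
            recognitionModel: RecognitionModel.Recognition04,
            returnFaceAttributes: new List<FaceAttributeType>
            {
                FaceAttributeType.QualityForRecognition
            });

        // Enroll only when exactly one face is present and rated High;
        // otherwise show UI guidance (move closer, improve lighting, and so on).
        return faces.Count == 1
            && faces[0].FaceAttributes?.QualityForRecognition == QualityForRecognition.High;
    }
}
```

A caller would loop over camera frames, keep the first (or best) frame that passes this check, and add only that face to the person's enrollment.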

articles/ai-services/computer-vision/Tutorials/storage-lab-tutorial.md (19 additions, 19 deletions)

@@ -7,7 +7,7 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-vision
 ms.topic: tutorial
-ms.date: 12/29/2022
+ms.date: 02/14/2024
 ms.author: pafarley
 ms.devlang: csharp
 ms.custom: devx-track-csharp, build-2023, build-2023-dataai

@@ -411,38 +411,38 @@ Next, you'll add the code that actually uses the Azure AI Vision service to crea
 1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file:

     ```csharp
-    using Azure.AI.Vision.Common;
+    using Azure;
     using Azure.AI.Vision.ImageAnalysis;
+    using System;
     ```

 1. Then, go to the **Upload** method; this method converts and uploads images to blob storage. Add the following code immediately after the block that begins with `// Generate a thumbnail` (or at the end of your image-blob-creation process). This code takes the blob containing the image (`photo`), and uses Azure AI Vision to generate a description for that image. The Azure AI Vision API also generates a list of keywords that apply to the image. The generated description and keywords are stored in the blob's metadata so that they can be retrieved later on.

     ```csharp
-    // Submit the image to the Azure AI Vision API
-    var serviceOptions = new VisionServiceOptions(
-        Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"]),
+    // create a new ImageAnalysisClient
+    ImageAnalysisClient client = new ImageAnalysisClient(
+        new Uri(Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"])),
         new AzureKeyCredential(ConfigurationManager.AppSettings["SubscriptionKey"]));

-    var analysisOptions = new ImageAnalysisOptions()
-    {
-        Features = ImageAnalysisFeature.Caption | ImageAnalysisFeature.Tags,
-        Language = "en",
-        GenderNeutralCaption = true
-    };
+    VisualFeatures visualFeatures = VisualFeatures.Caption | VisualFeatures.Tags;

-    using var imageSource = VisionSource.FromUrl(
-        new Uri(photo.Uri.ToString()));
+    ImageAnalysisOptions analysisOptions = new ImageAnalysisOptions()
+    {
+        GenderNeutralCaption = true,
+        Language = "en",
+    };

-    using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions);
-    var result = analyzer.Analyze();
+    Uri imageURL = new Uri(photo.Uri.ToString());
+
+    ImageAnalysisResult result = client.Analyze(imageURL, visualFeatures, analysisOptions);

     // Record the image description and tags in blob metadata
-    photo.Metadata.Add("Caption", result.Caption.ContentCaption.Content);
+    photo.Metadata.Add("Caption", result.Caption.Text);

-    for (int i = 0; i < result.Tags.ContentTags.Count; i++)
+    for (int i = 0; i < result.Tags.Values.Count; i++)
     {
         string key = String.Format("Tag{0}", i);
-        photo.Metadata.Add(key, result.Tags.ContentTags[i]);
+        photo.Metadata.Add(key, result.Tags.Values[i]);
     }

     await photo.SetMetadataAsync();

@@ -554,7 +554,7 @@ In this section, you will add a search box to the home page, enabling users to d
     }
     ```

-    Observe that the **Index** method now accepts a parameter _id_ that contains the value the user typed into the search box. An empty or missing _id_ parameter indicates that all the photos should be displayed.
+    Observe that the **Index** method now accepts a parameter `id` that contains the value the user typed into the search box. An empty or missing `id` parameter indicates that all the photos should be displayed.

 1. Add the following helper method to the **HomeController** class:
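The Upload-method hunk above migrates the tutorial from the retired Azure.AI.Vision.Common analyzer to the Azure.AI.Vision.ImageAnalysis (Image Analysis 4.0) client. The same call pattern, pulled out of the diff into a minimal self-contained sketch (the endpoint, key, and image URL below are placeholders; the tutorial itself reads them from the `VisionEndpoint` and `SubscriptionKey` app settings):

```csharp
// Minimal sketch of the Image Analysis 4.0 pattern used in the updated
// tutorial code. Endpoint, key, and image URL are placeholders.
using System;
using Azure;
using Azure.AI.Vision.ImageAnalysis;

class CaptionAndTagsSketch
{
    static void Main()
    {
        ImageAnalysisClient client = new ImageAnalysisClient(
            new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<your-key>"));

        // Request a caption and tags in a single call, as the tutorial does.
        ImageAnalysisResult result = client.Analyze(
            new Uri("https://example.com/photo.jpg"),
            VisualFeatures.Caption | VisualFeatures.Tags,
            new ImageAnalysisOptions { GenderNeutralCaption = true, Language = "en" });

        Console.WriteLine($"Caption: {result.Caption.Text}");
        foreach (DetectedTag tag in result.Tags.Values)
        {
            Console.WriteLine($"Tag: {tag.Name} ({tag.Confidence:0.00})");
        }
    }
}
```

Note the shape change from the old SDK: the caption moves from `result.Caption.ContentCaption.Content` to `result.Caption.Text`, and tags from `result.Tags.ContentTags` to the `result.Tags.Values` list of `DetectedTag` objects.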

articles/ai-services/computer-vision/concept-face-detection.md (1 addition, 1 deletion)

@@ -70,7 +70,7 @@ Attributes are a set of features that can optionally be detected by the [Face -
 >[!NOTE]
 > The availability of each attribute depends on the detection model specified. QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.

-## Input data
+## Input requirements

 Use the following tips to make sure that your input images give the most accurate detection results:

articles/ai-services/computer-vision/concept-face-recognition.md (3 additions, 3 deletions)

@@ -11,13 +11,13 @@ ms.subservice: azure-ai-face
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 12/27/2022
+ms.date: 02/14/2024
 ms.author: pafarley
 ---

 # Face recognition

-This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the act of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
+This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the process of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.

 You can try out the capabilities of face recognition quickly and easily using Vision Studio.
 > [!div class="nextstepaction"]

@@ -48,7 +48,7 @@ The recognition operations use mainly the following data structures. These objec
 See the [Face recognition data structures](./concept-face-recognition-data-structures.md) guide.

-## Input data
+## Input requirements

 Use the following tips to ensure that your input images give the most accurate recognition results:

articles/ai-services/computer-vision/concept-shelf-analysis.md (6 additions, 5 deletions)

@@ -8,7 +8,7 @@ manager: nitinme

 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 05/03/2023
+ms.date: 02/14/2024
 ms.author: pafarley
 ms.custom: references_regions, build-2023, build-2023-dataai
 ---

@@ -35,13 +35,13 @@ Try out the capabilities of Product Recognition quickly and easily in your brows
 ## Product Recognition features

-### Shelf Image Composition
+### Shelf image composition

 The [stitching and rectification APIs](./how-to/shelf-modify-images.md) let you modify images to improve the accuracy of the Product Understanding results. You can use these APIs to:
 * Stitch together multiple images of a shelf to create a single image.
 * Rectify an image to remove perspective distortion.

-### Shelf Product Recognition (pretrained model)
+### Shelf product recognition (pretrained model)

 The [Product Understanding API](./how-to/shelf-analyze.md) lets you analyze a shelf image using the out-of-box pretrained model. This operation detects products and gaps in the shelf image and returns the bounding box coordinates of each product and gap, along with a confidence score for each.

@@ -90,7 +90,7 @@ The following JSON response illustrates what the Product Understanding API retur
 }
 ```

-### Shelf Product Recognition - Custom (customized model)
+### Shelf product recognition (customized model)

 The Product Understanding API can also be used with a [custom trained model](./how-to/shelf-model-customization.md) to detect your specific products. This operation returns the bounding box coordinates of each product and gap, along with the label of each product.

@@ -139,7 +139,7 @@ The following JSON response illustrates what the Product Understanding API retur
 }
 ```

-### Shelf Planogram Compliance (preview)
+### Shelf planogram compliance

 The [Planogram matching API](./how-to/shelf-planogram.md) lets you compare the results of the Product Understanding API to a planogram document. This operation matches each detected product and gap to its corresponding position in the planogram document.

@@ -183,3 +183,4 @@ It returns a JSON response that accounts for each position in the planogram docu
 Get started with Product Recognition by trying out the stitching and rectification APIs. Then do basic analysis with the Product Understanding API.
 * [Prepare images for Product Recognition](./how-to/shelf-modify-images.md)
 * [Analyze a shelf image](./how-to/shelf-analyze.md)
+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b)

articles/ai-services/computer-vision/enrollment-overview.md (4 additions, 4 deletions)

@@ -8,13 +8,13 @@ manager: nitinme
 ms.service: azure-ai-vision
 ms.subservice: azure-ai-face
 ms.topic: best-practice
-ms.date: 09/27/2021
+ms.date: 02/14/2024
 ms.author: pafarley
 ---

 # Best practices for adding users to a Face service

-In order to use the Azure AI Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar data structure. This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy.
+In order to use the Azure AI Face API for face verification or identification, you need to enroll faces into a **LargePersonGroup** or similar [data structure](/azure/ai-services/computer-vision/concept-face-recognition-data-structures). This deep-dive demonstrates best practices for gathering meaningful consent from users and example logic to create high-quality enrollments that will optimize recognition accuracy.

 ## Meaningful consent

@@ -29,7 +29,7 @@ Based on Microsoft user research, Microsoft's Responsible AI principles, and [ex

 This section offers guidance for developing an enrollment application for facial recognition. This guidance has been developed based on Microsoft user research in the context of enrolling individuals in facial recognition for building entry. Therefore, these recommendations might not apply to all facial recognition solutions. Responsible use for Face API depends strongly on the specific context in which it's integrated, so the prioritization and application of these recommendations should be adapted to your scenario.

-> [!NOTE]
+> [!IMPORTANT]
 > It is your responsibility to align your enrollment application with applicable legal requirements in your jurisdiction and accurately reflect all of your data collection and processing practices.

 ## Application development

@@ -39,7 +39,7 @@ Before you design an enrollment flow, think about how the application you're bui
 |Category | Recommendations |
 |---|---|
 |Hardware | Consider the camera quality of the enrollment device. |
-|Recommended enrollment features | Include a log-on step with multi-factor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. |
+|Recommended enrollment features | Include a log-on step with multifactor authentication. </br></br>Link user information like an alias or identification number with their face template ID from the Face API (known as person ID). This mapping is necessary to retrieve and manage a user's enrollment. Note: person ID should be treated as a secret in the application.</br></br>Set up an automated process to delete all enrollment data, including the face templates and enrollment photos of people who are no longer users of facial recognition technology, such as former employees. </br></br>Avoid auto-enrollment, as it does not give the user the awareness, understanding, freedom of choice, or control that is recommended for obtaining consent. </br></br>Ask users for permission to save the images used for enrollment. This is useful when there is a model update since new enrollment photos will be required to re-enroll in the new model about every 10 months. If the original images aren't saved, users will need to go through the enrollment process from the beginning.</br></br>Allow users to opt out of storing photos in the system. To make the choice clearer, you can add a second consent request screen for saving the enrollment photos. </br></br>If photos are saved, create an automated process to re-enroll all users when there is a model update. Users who saved their enrollment photos will not have to enroll themselves again. </br></br>Create an app feature that allows designated administrators to override certain quality filters if a user has trouble enrolling. |
 |Security | Azure AI services follow [best practices](../cognitive-services-virtual-networks.md?tabs=portal) for encrypting user data at rest and in transit. The following are other practices that can help uphold the security promises you make to users during the enrollment experience. </br></br>Take security measures to ensure that no one has access to the person ID at any point during enrollment. Note: PersonID should be treated as a secret in the enrollment system. </br></br>Use [role-based access control](../../role-based-access-control/overview.md) with Azure AI services. </br></br>Use token-based authentication and/or shared access signatures (SAS) over keys and secrets to access resources like databases. By using request or SAS tokens, you can grant limited access to data without compromising your account keys, and you can specify an expiry time on the token. </br></br>Never store any secrets, keys, or passwords in your app. |
 |User privacy |Provide a range of enrollment options to address different levels of privacy concerns. Do not mandate that people use their personal devices to enroll into a facial recognition system. </br></br>Allow users to re-enroll, revoke consent, and delete data from the enrollment application at any time and for any reason. |
 |Accessibility |Follow accessibility standards (for example, [ADA](https://www.ada.gov/regs2010/2010ADAStandards/2010ADAstandards.htm) or [W3C](https://www.w3.org/TR/WCAG21/)) to ensure the application is usable by people with mobility or visual impairments. |
