Commit 89a4cf9

Merge pull request #266455 from MicrosoftDocs/main

02/15 PM Publishing

2 parents 9460527 + cdb06ff

177 files changed: +1784 −3308 lines changed

(Four binary image files changed, not rendered here: 109 KB, 57.7 KB, 57.8 KB, 114 KB)

articles/ai-services/computer-vision/Tutorials/build-enrollment-app.md

Lines changed: 13 additions & 10 deletions
@@ -10,28 +10,31 @@ ms.subservice: azure-ai-face
 ms.custom:
   - ignite-2023
 ms.topic: tutorial
-ms.date: 11/17/2020
+ms.date: 02/14/2024
 ms.author: pafarley
 ---
 
 # Build a React Native app to add users to a Face service
 
-This guide will show you how to get started with the sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-accuracy face data. An integrated system could use an app like this to provide touchless access control, identification, attendance tracking, or personalization kiosk, based on their face data.
+This guide will show you how to get started with a sample Face enrollment application. The app demonstrates best practices for obtaining meaningful consent to add users into a face recognition service and acquire high-quality face data. An integrated system could use an app like this to provide touchless access control, identification, attendance tracking, or personalization kiosk, based on their face data.
 
-When launched, the application shows users a detailed consent screen. If the user gives consent, the app prompts for a username and password and then captures a high-quality face image using the device's camera.
+When users launch the app, it shows a detailed consent screen. If the user gives consent, the app prompts them for a username and password and then captures a high-quality face image using the device's camera.
 
-The sample app is written using JavaScript and the React Native framework. It can currently be deployed on Android and iOS devices; more deployment options are coming in the future.
+The sample app is written using JavaScript and the React Native framework. It can be deployed on Android and iOS devices.
 
 ## Prerequisites
 
 * An Azure subscription – [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
 * Once you have your Azure subscription, [create a Face resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFace) in the Azure portal to get your key and endpoint. After it deploys, select **Go to resource**.
 * You'll need the key and endpoint from the resource you created to connect your application to Face API.
 
-### Important Security Considerations
-* For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login.
-* Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
-* As a best practice, consider having separate API keys for development and production.
+> [!IMPORTANT]
+> **Security considerations**
+>
+> * For local development and initial limited testing, it is acceptable (although not best practice) to use environment variables to hold the API key and endpoint. For pilot and final deployments, the API key should be stored securely - which likely involves using an intermediate service to validate a user token generated during login.
+> * Never store the API key or endpoint in code or commit them to a version control system (e.g. Git). If that happens by mistake, you should immediately generate a new API key/endpoint and revoke the previous ones.
+> * As a best practice, consider having separate API keys for development and production.
 
 ## Set up the development environment
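
For local development as described in the security note above, reading the key and endpoint from environment variables might look like the following minimal C# sketch (not part of this commit; the FACE_APIKEY and FACE_ENDPOINT variable names are hypothetical):

```csharp
using System;

// Minimal sketch, assuming hypothetical FACE_APIKEY / FACE_ENDPOINT
// environment variables that are set outside of source control.
class FaceConfig
{
    public static string Key =>
        Environment.GetEnvironmentVariable("FACE_APIKEY")
        ?? throw new InvalidOperationException("FACE_APIKEY is not set.");

    public static string Endpoint =>
        Environment.GetEnvironmentVariable("FACE_ENDPOINT")
        ?? throw new InvalidOperationException("FACE_ENDPOINT is not set.");
}
```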

@@ -63,7 +66,7 @@ The sample app is written using JavaScript and the React Native framework. It ca
 ## Customize the app for your business
 
-Now that you have set up the sample app, you can tailor it to your own needs.
+Now that you've set up the sample app, you can tailor it to your own needs.
 
 For example, you may want to add situation-specific information on your consent page:

@@ -76,7 +79,7 @@ For example, you may want to add situation-specific information on your consent
 * Face size (faces that are distant from the camera)
 * Face orientation (faces turned or tilted away from camera)
 * Poor lighting conditions (either low light or backlighting) where the image may be poorly exposed or have too much noise
-* Occlusion (partially hidden or obstructed faces) including accessories like hats or thick-rimmed glasses)
+* Occlusion (partially hidden or obstructed faces), including accessories like hats or thick-rimmed glasses
 * Blur (such as by rapid face movement when the photograph was taken).
 
 The service provides image quality checks to help you make the choice of whether the image is of sufficient quality based on the above factors to add the customer or attempt face recognition. This app demonstrates how to access frames from the device's camera, detect quality and show user interface messages to the user to help them capture a higher quality image, select the highest-quality frames, and add the detected face into the Face API service.
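
To make the frame-selection step concrete, here is a minimal C# sketch (not from the sample app, which is JavaScript; the CapturedFrame type and its QualityScore property are hypothetical stand-ins for whatever per-frame quality signal the app derives from the service's checks):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical frame record; QualityScore stands in for the app's
// per-frame quality signal.
record CapturedFrame(byte[] ImageBytes, double QualityScore);

static class FrameSelector
{
    // Pick the highest-quality frame; returns null if no frame met the bar.
    public static CapturedFrame SelectBest(
        IEnumerable<CapturedFrame> frames, double minimumQuality)
    {
        return frames
            .Where(f => f.QualityScore >= minimumQuality)
            .OrderByDescending(f => f.QualityScore)
            .FirstOrDefault();
    }
}
```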

articles/ai-services/computer-vision/Tutorials/storage-lab-tutorial.md

Lines changed: 19 additions & 19 deletions
@@ -7,7 +7,7 @@ author: PatrickFarley
 manager: nitinme
 ms.service: azure-ai-vision
 ms.topic: tutorial
-ms.date: 12/29/2022
+ms.date: 02/14/2024
 ms.author: pafarley
 ms.devlang: csharp
 ms.custom: devx-track-csharp, build-2023, build-2023-dataai
@@ -411,38 +411,38 @@ Next, you'll add the code that actually uses the Azure AI Vision service to crea
 1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file:
 
     ```csharp
-    using Azure.AI.Vision.Common;
+    using Azure;
     using Azure.AI.Vision.ImageAnalysis;
+    using System;
     ```
 
 1. Then, go to the **Upload** method; this method converts and uploads images to blob storage. Add the following code immediately after the block that begins with `// Generate a thumbnail` (or at the end of your image-blob-creation process). This code takes the blob containing the image (`photo`), and uses Azure AI Vision to generate a description for that image. The Azure AI Vision API also generates a list of keywords that apply to the image. The generated description and keywords are stored in the blob's metadata so that they can be retrieved later on.
 
    ```csharp
-   // Submit the image to the Azure AI Vision API
-   var serviceOptions = new VisionServiceOptions(
-       Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"]),
+   // create a new ImageAnalysisClient
+   ImageAnalysisClient client = new ImageAnalysisClient(
+       new Uri(Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"])),
        new AzureKeyCredential(ConfigurationManager.AppSettings["SubscriptionKey"]));
 
-   var analysisOptions = new ImageAnalysisOptions()
-   {
-       Features = ImageAnalysisFeature.Caption | ImageAnalysisFeature.Tags,
-       Language = "en",
-       GenderNeutralCaption = true
-   };
+   VisualFeatures visualFeatures = VisualFeatures.Caption | VisualFeatures.Tags;
 
-   using var imageSource = VisionSource.FromUrl(
-       new Uri(photo.Uri.ToString()));
+   ImageAnalysisOptions analysisOptions = new ImageAnalysisOptions()
+   {
+       GenderNeutralCaption = true,
+       Language = "en",
+   };
 
-   using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions);
-   var result = analyzer.Analyze();
+   Uri imageURL = new Uri(photo.Uri.ToString());
+
+   ImageAnalysisResult result = client.Analyze(imageURL, visualFeatures, analysisOptions);
 
    // Record the image description and tags in blob metadata
-   photo.Metadata.Add("Caption", result.Caption.ContentCaption.Content);
+   photo.Metadata.Add("Caption", result.Caption.Text);
 
-   for (int i = 0; i < result.Tags.ContentTags.Count; i++)
+   for (int i = 0; i < result.Tags.Values.Count; i++)
    {
        string key = String.Format("Tag{0}", i);
-       photo.Metadata.Add(key, result.Tags.ContentTags[i]);
+       photo.Metadata.Add(key, result.Tags.Values[i].Name);
    }
 
    await photo.SetMetadataAsync();
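
Because the **Upload** method is already asynchronous (it awaits `photo.SetMetadataAsync()`), the analysis call could presumably use the client's async counterpart instead; a hedged one-line variant:

```csharp
// Hedged variant, assuming the SDK's async counterpart of Analyze:
ImageAnalysisResult result = await client.AnalyzeAsync(imageURL, visualFeatures, analysisOptions);
```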
@@ -554,7 +554,7 @@ In this section, you will add a search box to the home page, enabling users to d
     }
     ```
 
-    Observe that the **Index** method now accepts a parameter _id_ that contains the value the user typed into the search box. An empty or missing _id_ parameter indicates that all the photos should be displayed.
+    Observe that the **Index** method now accepts a parameter `id` that contains the value the user typed into the search box. An empty or missing `id` parameter indicates that all the photos should be displayed.
 
 1. Add the following helper method to the **HomeController** class:
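
A minimal sketch of a helper in that spirit (the method name and matching rule here are assumptions, not the tutorial's actual code), matching the `id` term against the caption and tag metadata stored during upload:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class PhotoSearch
{
    // Hypothetical helper: true if any metadata value on the blob
    // (Caption, Tag0, Tag1, ...) contains the search term; an empty
    // or missing term matches everything, so all photos are displayed.
    public static bool MatchesSearchTerm(IDictionary<string, string> metadata, string id)
    {
        if (String.IsNullOrEmpty(id))
            return true;

        return metadata.Values.Any(value =>
            value.IndexOf(id, StringComparison.OrdinalIgnoreCase) >= 0);
    }
}
```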

articles/ai-services/computer-vision/concept-face-detection.md

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@ Attributes are a set of features that can optionally be detected by the [Face -
 >[!NOTE]
 > The availability of each attribute depends on the detection model specified. QualityForRecognition attribute also depends on the recognition model, as it is currently only available when using a combination of detection model detection_01 or detection_03, and recognition model recognition_03 or recognition_04.
 
-## Input data
+## Input requirements
 
 Use the following tips to make sure that your input images give the most accurate detection results:
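
For illustration, requesting the QualityForRecognition attribute with a compatible model pair might look like the following sketch, assuming the Microsoft.Azure.CognitiveServices.Vision.Face client library (not part of this commit):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class QualityCheck
{
    // Sketch: detect faces and request QualityForRecognition using a
    // compatible combination (detection_03 + recognition_04), per the
    // note above about attribute availability.
    static async Task<IList<DetectedFace>> DetectWithQualityAsync(
        IFaceClient client, string imageUrl)
    {
        return await client.Face.DetectWithUrlAsync(
            imageUrl,
            returnFaceAttributes: new List<FaceAttributeType>
            {
                FaceAttributeType.QualityForRecognition
            },
            detectionModel: DetectionModel.Detection03,
            recognitionModel: RecognitionModel.Recognition04);
    }
}
```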

articles/ai-services/computer-vision/concept-face-recognition.md

Lines changed: 3 additions & 3 deletions
@@ -11,13 +11,13 @@ ms.subservice: azure-ai-face
 ms.custom:
   - ignite-2023
 ms.topic: conceptual
-ms.date: 12/27/2022
+ms.date: 02/14/2024
 ms.author: pafarley
 ---
 
 # Face recognition
 
-This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the act of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
+This article explains the concept of Face recognition, its related operations, and the underlying data structures. Broadly, face recognition is the process of verifying or identifying individuals by their faces. Face recognition is important in implementing the identification scenario, which enterprises and apps can use to verify that a (remote) user is who they claim to be.
 
 You can try out the capabilities of face recognition quickly and easily using Vision Studio.
 > [!div class="nextstepaction"]
@@ -48,7 +48,7 @@ The recognition operations use mainly the following data structures. These objec
 
 See the [Face recognition data structures](./concept-face-recognition-data-structures.md) guide.
 
-## Input data
+## Input requirements
 
 Use the following tips to ensure that your input images give the most accurate recognition results:

articles/ai-services/computer-vision/concept-shelf-analysis.md

Lines changed: 6 additions & 5 deletions
@@ -8,7 +8,7 @@ manager: nitinme
 
 ms.service: azure-ai-vision
 ms.topic: conceptual
-ms.date: 05/03/2023
+ms.date: 02/14/2024
 ms.author: pafarley
 ms.custom: references_regions, build-2023, build-2023-dataai
 ---
@@ -35,13 +35,13 @@ Try out the capabilities of Product Recognition quickly and easily in your brows
 
 ## Product Recognition features
 
-### Shelf Image Composition
+### Shelf image composition
 
 The [stitching and rectification APIs](./how-to/shelf-modify-images.md) let you modify images to improve the accuracy of the Product Understanding results. You can use these APIs to:
 * Stitch together multiple images of a shelf to create a single image.
 * Rectify an image to remove perspective distortion.
 
-### Shelf Product Recognition (pretrained model)
+### Shelf product recognition (pretrained model)
 
 The [Product Understanding API](./how-to/shelf-analyze.md) lets you analyze a shelf image using the out-of-box pretrained model. This operation detects products and gaps in the shelf image and returns the bounding box coordinates of each product and gap, along with a confidence score for each.
@@ -90,7 +90,7 @@ The following JSON response illustrates what the Product Understanding API retur
 }
 ```
 
-### Shelf Product Recognition - Custom (customized model)
+### Shelf product recognition (customized model)
 
 The Product Understanding API can also be used with a [custom trained model](./how-to/shelf-model-customization.md) to detect your specific products. This operation returns the bounding box coordinates of each product and gap, along with the label of each product.

@@ -139,7 +139,7 @@ The following JSON response illustrates what the Product Understanding API retur
 }
 ```
 
-### Shelf Planogram Compliance (preview)
+### Shelf planogram compliance
 
 The [Planogram matching API](./how-to/shelf-planogram.md) lets you compare the results of the Product Understanding API to a planogram document. This operation matches each detected product and gap to its corresponding position in the planogram document.

@@ -183,3 +183,4 @@ It returns a JSON response that accounts for each position in the planogram docu
 Get started with Product Recognition by trying out the stitching and rectification APIs. Then do basic analysis with the Product Understanding API.
 * [Prepare images for Product Recognition](./how-to/shelf-modify-images.md)
 * [Analyze a shelf image](./how-to/shelf-analyze.md)
+* [API reference](https://eastus.dev.cognitive.microsoft.com/docs/services/unified-vision-apis-public-preview-2023-04-01-preview/operations/644aba14fb42681ae06f1b0b)
