
Commit 5607367

Merge pull request #2453 from eric-urban/eur/model-inference-resolve-conflicts-1
[DIRTY PR] model inference resolve conflicts 1
2 parents 043b431 + a7c440c commit 5607367


52 files changed: +493 −558 lines

articles/ai-services/autoscale.md

Lines changed: 24 additions & 23 deletions
@@ -7,20 +7,20 @@ ms.service: azure-ai-services
ms.custom:
- ignite-2023
ms.topic: how-to
-ms.date: 01/20/2024
+ms.date: 01/10/2025
---

# Autoscale AI services limits

-This article provides guidance for how customers can access higher rate limits on their Azure AI services resources.
+This article provides guidance for how customers can access higher rate limits on certain Azure AI services resources.

## Overview

Each Azure AI services resource has a pre-configured static call rate (transactions per second) that limits the number of concurrent calls customers can make to the backend service in a given time frame. The autoscale feature automatically raises or lowers a resource's rate limits based on near-real-time resource usage metrics and backend service capacity metrics.

## Get started with the autoscale feature

-This feature is disabled by default for every new resource. Follow these instructions to enable it.
+This feature is disabled by default for every new resource. [If your resource supports autoscale](#which-services-support-the-autoscale-feature), follow these instructions to enable it:

#### [Azure portal](#tab/portal)

@@ -41,6 +41,26 @@ az resource update --namespace Microsoft.CognitiveServices --resource-type accou

## Frequently asked questions

+### Which services support the autoscale feature?
+
+The autoscale feature is available in the paid subscription tier of the following services:
+
+* [Azure AI Vision](computer-vision/index.yml)
+* [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
+* [Anomaly Detector](anomaly-detector/overview.md)
+* [Content Moderator](content-moderator/overview.md)
+* [Custom Vision (Prediction)](custom-vision-service/overview.md)
+* [Immersive Reader](immersive-reader/overview.md)
+* [LUIS](luis/what-is-luis.md)
+* [Metrics Advisor](metrics-advisor/overview.md)
+* [Personalizer](personalizer/what-is-personalizer.md)
+* [QnAMaker](qnamaker/overview/overview.md)
+* [Document Intelligence](document-intelligence/overview.md?tabs=v3-0)
+
+### Can I test this feature using a free subscription?
+
+No, the autoscale feature isn't available to free tier subscriptions.
+
### Does enabling the autoscale feature mean my resource will never be throttled again?

No, you may still get `429` errors when you exceed the rate limit. If your application triggers a spike and your resource reports a `429` response, autoscale checks the available capacity projection to see whether the current capacity can accommodate a rate limit increase, and responds within five minutes.
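
Until the limit is raised, a client should treat `429` as retryable. The sketch below is illustrative only — the `RateLimited` exception and the `send` callable are placeholders, not part of any Azure SDK — and shows exponential backoff that honors a `Retry-After` hint when the service supplies one:

```python
import time
from typing import Callable, Optional


class RateLimited(Exception):
    """Illustrative stand-in for an HTTP 429 response from the service."""

    def __init__(self, retry_after: Optional[float] = None):
        super().__init__("429 Too Many Requests")
        self.retry_after = retry_after


def call_with_backoff(send: Callable[[], object], max_retries: int = 5,
                      sleep: Callable[[float], None] = time.sleep):
    """Invoke `send`, retrying on 429 with exponential backoff.

    Autoscale can take up to five minutes to raise the limit, so the
    client still needs to back off while capacity is being adjusted.
    """
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return send()
        except RateLimited as err:
            if attempt == max_retries - 1:
                raise
            # Prefer the service's Retry-After hint when it's present.
            sleep(err.retry_after if err.retry_after is not None else delay)
            delay *= 2
```

Injecting `send` and `sleep` keeps the backoff logic testable without making real network calls.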
@@ -63,29 +83,10 @@ Be aware of potential errors and their consequences. If a bug in your client app

Yes, you can disable the autoscale feature through the Azure portal or CLI and return to your default call rate limit setting. If your resource was previously approved for a higher default TPS, it goes back to that rate. It can take up to five minutes for the changes to go into effect.

-### Which services support the autoscale feature?
-
-Autoscale feature is available for the following services:
-
-* [Azure AI Vision](computer-vision/index.yml)
-* [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
-* [Anomaly Detector](anomaly-detector/overview.md)
-* [Content Moderator](content-moderator/overview.md)
-* [Custom Vision (Prediction)](custom-vision-service/overview.md)
-* [Immersive Reader](immersive-reader/overview.md)
-* [LUIS](luis/what-is-luis.md)
-* [Metrics Advisor](metrics-advisor/overview.md)
-* [Personalizer](personalizer/what-is-personalizer.md)
-* [QnAMaker](qnamaker/overview/overview.md)
-* [Document Intelligence](document-intelligence/overview.md?tabs=v3-0)
-
-### Can I test this feature using a free subscription?
-
-No, the autoscale feature isn't available to free tier subscriptions.

## Next steps

* [Plan and Manage costs for Azure AI services](../ai-studio/how-to/costs-plan-manage.md).
-* [Optimize your cloud investment with Azure Cost Management](/azure/cost-management-billing/costs/cost-mgt-best-practices?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+* [Optimize your cloud investment with Microsoft Cost Management](/azure/cost-management-billing/costs/cost-mgt-best-practices?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
* Learn about how to [prevent unexpected costs](/azure/cost-management-billing/cost-management-billing-overview?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
* Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.

articles/ai-services/computer-vision/concept-brand-detection.md

Lines changed: 9 additions & 4 deletions
@@ -8,17 +8,19 @@ manager: nitinme
ms.service: azure-ai-vision
ms.topic: conceptual
-ms.date: 01/19/2024
+ms.date: 01/22/2025
ms.author: pafarley
---

# Brand detection

-Brand detection is a specialized mode of [object detection](concept-object-detection.md) that uses a database of thousands of global logos to identify commercial brands in images or video. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.
+Brand detection is a specialized mode of [object detection](concept-object-detection.md) that uses a database of thousands of global corporate logos to identify commercial brands in images or video. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.
+
+## How it works

The Azure AI Vision service detects whether there are brand logos in a given image. If a brand logo is detected, the service returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.

-The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the Vision service doesn't detect the brand you're looking for, you could also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.
+The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the Vision service doesn't detect the brand you're looking for, you can also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.

## Brand detection example

@@ -71,4 +73,7 @@ In some cases, the brand detector picks up both the logo image and the stylized

The brand detection feature is part of the [Analyze Image](/rest/api/computervision/analyze-image) API. You can call this API by using a native SDK or through REST calls. Include `Brands` in the `visualFeatures` query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"brands"` section.

-* [Quickstart: Image Analysis](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+## Next step
+
+> [!div class="nextstepaction"]
+> [Quickstart: Image Analysis](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
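
As a rough illustration of the REST flow described above, this Python sketch posts an image URL to Analyze Image 3.2 with `visualFeatures=Brands` and parses the `"brands"` section of the response; the endpoint and key are placeholders you'd replace with your own resource's values:

```python
import json
import urllib.request

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-key>"  # placeholder


def parse_brands(payload: dict) -> list:
    """Extract (name, confidence) pairs from the "brands" section of
    an Analyze Image JSON response."""
    return [(b["name"], b["confidence"]) for b in payload.get("brands", [])]


def detect_brands(image_url: str) -> list:
    """POST an image URL to Analyze Image 3.2 with visualFeatures=Brands."""
    req = urllib.request.Request(
        f"{ENDPOINT}/vision/v3.2/analyze?visualFeatures=Brands",
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_brands(json.load(resp))
```

Each entry in `"brands"` also carries a `rectangle` with the bounding-box coordinates if you need them.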

articles/ai-services/computer-vision/concept-detecting-adult-content.md

Lines changed: 5 additions & 2 deletions
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: azure-ai-vision
ms.topic: conceptual
-ms.date: 01/19/2024
+ms.date: 01/22/2025
ms.collection: "ce-skilling-fresh-tier2, ce-skilling-ai-copilot"
ms.author: pafarley
---
@@ -37,4 +37,7 @@ The "adult" classification contains several different categories:

You can detect adult content with the [Analyze Image 3.2](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties—`isAdultContent`, `isRacyContent`, and `isGoryContent`—in its JSON response. The method also returns corresponding properties—`adultScore`, `racyScore`, and `goreScore`—which represent confidence scores between zero and one for each respective category.

-- [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+## Next step
+
+> [!div class="nextstepaction"]
+> [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
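
The response shape described above can be flattened in a few lines. This Python helper is a sketch against the documented property names; the snake_case output keys and the fallback defaults are my own choices:

```python
def parse_adult_result(payload: dict) -> dict:
    """Flatten the "adult" section of an Analyze Image 3.2 response into
    a single dict of boolean flags and confidence scores."""
    adult = payload.get("adult", {})
    return {
        "is_adult": adult.get("isAdultContent", False),
        "is_racy": adult.get("isRacyContent", False),
        "is_gory": adult.get("isGoryContent", False),
        "adult_score": adult.get("adultScore", 0.0),
        "racy_score": adult.get("racyScore", 0.0),
        "gore_score": adult.get("goreScore", 0.0),
    }
```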

articles/ai-services/computer-vision/concept-detecting-color-schemes.md

Lines changed: 4 additions & 1 deletion
@@ -80,4 +80,7 @@ The following table shows Azure AI Vision's black and white evaluation in the sa

The color scheme detection feature is part of the [Analyze Image 3.2](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. You can call this API through a native SDK or through REST calls. Include `Color` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"color"` section.

-* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+## Next step
+
+> [!div class="nextstepaction"]
+> [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
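
A small Python helper can summarize the `"color"` section; the camelCase property names below follow the Analyze Image 3.2 response schema, while the snake_case output keys are my own naming:

```python
def parse_color_scheme(payload: dict) -> dict:
    """Summarize the "color" section of an Analyze Image 3.2 response."""
    color = payload.get("color", {})
    return {
        "dominant_foreground": color.get("dominantColorForeground"),
        "dominant_background": color.get("dominantColorBackground"),
        "dominant_colors": color.get("dominantColors", []),
        "accent_color": color.get("accentColor"),  # hex RGB, no leading '#'
        "is_black_and_white": color.get("isBwImg", False),
    }
```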

articles/ai-services/computer-vision/concept-detecting-faces.md

Lines changed: 5 additions & 2 deletions
@@ -8,7 +8,7 @@ manager: nitinme
ms.service: azure-ai-vision
ms.topic: conceptual
-ms.date: 01/19/2024
+ms.date: 01/22/2025
ms.author: pafarley
---

@@ -112,4 +112,7 @@ The next example demonstrates the JSON response returned for an image containing

The face detection feature is part of the [Analyze Image 3.2](/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"faces"` section.

-* [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
+## Next step
+
+> [!div class="nextstepaction"]
+> [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)

articles/ai-services/computer-vision/concept-face-recognition-data-structures.md

Lines changed: 13 additions & 12 deletions
@@ -1,7 +1,7 @@
---
title: "Face recognition data structures - Face"
titleSuffix: Azure AI services
-description: Learn about the Face recognition data structures, which hold data on faces and persons.
+description: Learn about the Face recognition data structures, which store data on faces and persons.
#services: cognitive-services
author: PatrickFarley
manager: nitinme
@@ -11,7 +11,7 @@ ms.subservice: azure-ai-face
ms.custom:
- ignite-2023
ms.topic: conceptual
-ms.date: 11/04/2023
+ms.date: 01/22/2025
ms.author: pafarley
feedback_help_link_url: https://learn.microsoft.com/answers/tags/156/azure-face
---
@@ -24,14 +24,14 @@ This article explains the data structures used in the Face service for face reco

## Data structures used with Identify

-The Face Identify API uses container data structures to the hold face recognition data in the form of **Person** objects. There are three types of containers for this, listed from oldest to newest. We recommend you always use the newest one.
+The Face Identify API uses container data structures to hold face recognition data in the form of **Person** objects. There are three types of containers for this purpose, listed from oldest to newest. We recommend you always use the newest one.

### PersonGroup

**PersonGroup** is the smallest container data structure.
- You need to specify a recognition model when you create a **PersonGroup**. When any faces are added to that **PersonGroup**, it uses that model to process them. This model must match the model version used with the face IDs from the Detect API.
- You must call the Train API to make any new face data appear in the Identify API results. This includes adding/removing faces and adding/removing persons.
-- For the free tier subscription, it can hold up to 1000 Persons. For S0 paid subscription, it can have up to 10,000 Persons.
+- For the free tier subscription, it can hold up to 1,000 Persons. For the S0 paid subscription, it can have up to 10,000 Persons.

**PersonGroupPerson** represents a person to be identified. It can hold up to 248 faces.

@@ -45,7 +45,7 @@ The Face Identify API uses container data structures to the hold face recognitio

**PersonDirectory** is the newest data structure of this kind. It supports a larger scale and higher accuracy. Each Azure Face resource has a single default **PersonDirectory** data structure. It's a flat list of **PersonDirectoryPerson** objects - it can hold up to 20 million.

-**PersonDirectoryPerson** represents a person to be identified. Updated from the **PersonGroupPerson** model, it allows you to add faces from different recognition models to the same person. However, the Identify operation can only match faces obtained with the same recognition model.
+**PersonDirectoryPerson** represents a person to be identified. Based on the older **PersonGroupPerson** model, it allows you to add faces from different recognition models to the same person. However, the Identify operation can only match faces obtained with the same recognition model.

**DynamicPersonGroup** is a lightweight data structure that allows you to dynamically reference a **PersonDirectoryPerson**. It doesn't require the Train operation: once the data is updated, it's ready to be used with the Identify API.
@@ -61,26 +61,27 @@ For more details, please refer to the [PersonDirectory how-to guide](./how-to/us
| --- | --- | --- |
| Capacity | A **LargePersonGroup** can hold up to 1 million **PersonGroupPerson** objects. | The collection can store up to 20 million **PersonDirectoryPerson** identities. |
| PersonURI | `/largepersongroups/{groupId}/persons/{personId}` | `(/v1.0-preview-or-above)/persons/{personId}` |
-| Ownership | The **PersonGroupPerson** objects are exclusively owned by the **LargePersonGroup** they belong to. If you want a same identity kept in multiple groups, you will have to [Create Large Person Group Person](/rest/api/face/person-group-operations/create-large-person-group-person) and [Add Large Person Group Person Face](/rest/api/face/person-group-operations/add-large-person-group-person-face) for each group individually, ending up with a set of person IDs in several groups. | The **PersonDirectoryPerson** objects are directly stored inside the **PersonDirectory**, as a flat list. You can use an in-place person ID list to [Identify From Person Directory](/rest/api/face/face-recognition-operations/identify-from-person-directory), or optionally [Create Dynamic Person Group](/rest/api/face/person-directory-operations/create-dynamic-person-group) and hybridly include a person into the group. A created **PersonDirectoryPerson** object can be referenced by multiple **DynamicPersonGroup** without duplication. |
-| Model | The recognition model is determined by the **LargePersonGroup**. New faces for all **PersonGroupPerson** objects will become associated with this model when they're added to it. | The **PersonDirectoryPerson** object prepares separated storage per recognition model. You can specify the model when you add new faces, but the Identify API can only match faces obtained with the same recognition model, that is associated with the query faces. |
+| Ownership | The **PersonGroupPerson** objects are exclusively owned by the **LargePersonGroup** they belong to. If you want the same identity kept in multiple groups, you'll have to [Create Large Person Group Person](/rest/api/face/person-group-operations/create-large-person-group-person) and [Add Large Person Group Person Face](/rest/api/face/person-group-operations/add-large-person-group-person-face) for each group individually, ending up with a set of person IDs in several groups. | The **PersonDirectoryPerson** objects are stored directly inside the **PersonDirectory** as a flat list. You can use an in-place person ID list to [Identify From Person Directory](/rest/api/face/face-recognition-operations/identify-from-person-directory), or optionally [Create Dynamic Person Group](/rest/api/face/person-directory-operations/create-dynamic-person-group) and include a person in the group. A created **PersonDirectoryPerson** object can be referenced by multiple **DynamicPersonGroup** objects without duplication. |
+| Model | The recognition model is determined by the **LargePersonGroup**. New faces for all **PersonGroupPerson** objects become associated with this model when they're added to it. | The **PersonDirectoryPerson** object maintains separate storage per recognition model. You can specify the model when you add new faces, but the Identify API can only match faces obtained with the recognition model that is associated with the query faces. |
| Training | You must call the Train API to make any new face/person data appear in the Identify API results. | There's no need to make Train calls, but APIs such as [Add Person Face](/rest/api/face/person-directory-operations/add-person-face) become long-running operations, which means you should use the `Operation-Location` response header to check whether the update has completed. |
-| Cleanup | [Delete Large Person Group](/rest/api/face/person-group-operations/delete-large-person-group) will also delete the all the **PersonGroupPerson** objects it holds, as well as their face data. | [Delete Dynamic Person Group](/rest/api/face/person-directory-operations/delete-dynamic-person-group) will only unreference the **PersonDirectoryPerson**. To delete actual person and the face data, see [Delete Person](/rest/api/face/person-directory-operations/delete-person). |
+| Cleanup | [Delete Large Person Group](/rest/api/face/person-group-operations/delete-large-person-group) also deletes all the **PersonGroupPerson** objects it holds, along with their face data. | [Delete Dynamic Person Group](/rest/api/face/person-directory-operations/delete-dynamic-person-group) only unreferences the **PersonDirectoryPerson**. To delete the actual person and the face data, see [Delete Person](/rest/api/face/person-directory-operations/delete-person). |

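Because **PersonDirectory** updates such as Add Person Face are long-running operations, the client polls the URL returned in the `Operation-Location` header. A minimal Python sketch follows; the key is a placeholder, and the terminal status values are assumptions to verify against your resource's actual responses:

```python
import json
import time
import urllib.request

KEY = "<your-key>"  # placeholder for your Face resource key


def is_terminal(status: dict) -> bool:
    """True once a long-running Face operation has finished."""
    return status.get("status") in ("succeeded", "failed")


def wait_for_operation(operation_location: str, poll_seconds: float = 1.0,
                       timeout: float = 300.0) -> dict:
    """Poll the Operation-Location URL until the update completes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            operation_location,
            headers={"Ocp-Apim-Subscription-Key": KEY},
        )
        with urllib.request.urlopen(req) as resp:
            status = json.load(resp)
        if is_terminal(status):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("operation did not complete before the timeout")
```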
## Data structures used with Find Similar

-Unlike the Identify API, the Find Similar API is designed to be used in applications where the enrollment of **Person** is hard to set up (for example, face images captured from video analysis, or from a photo album analysis).
+Unlike the Identify API, the Find Similar API is used in applications where the enrollment of a **Person** is hard to set up (for example, face images captured from video analysis, or from a photo album analysis).

### FaceList

-**FaceList** represent a flat list of persisted faces. It can hold up 1,000 faces.
+**FaceList** represents a flat list of persisted faces. It can hold up to 1,000 faces.

### LargeFaceList

**LargeFaceList** is a later version, which can hold up to 1,000,000 faces.

-## Next steps
+## Next step

Now that you're familiar with the face data structures, write a script that uses them in the Identify operation.

-* [Face quickstart](./quickstarts-sdk/identity-client-library.md)
+> [!div class="nextstepaction"]
+> [Face quickstart](./quickstarts-sdk/identity-client-library.md)
