**articles/ai-services/autoscale.md** (24 additions, 23 deletions)
ms.service: azure-ai-services
ms.custom:
  - ignite-2023
ms.topic: how-to
ms.date: 01/10/2025
---
# Autoscale AI services limits

This article provides guidance for how customers can access higher rate limits on certain Azure AI services resources.
## Overview

Each Azure AI services resource has a pre-configured static call rate (transactions per second) that limits the number of concurrent calls customers can make to the backend service in a given time frame. The autoscale feature automatically raises or lowers a resource's rate limits based on near-real-time resource usage metrics and backend service capacity metrics.
## Get started with the autoscale feature

This feature is disabled by default for every new resource. [If your resource supports autoscale](#which-services-support-the-autoscale-feature), follow these instructions to enable it:

### Which services support the autoscale feature?

The autoscale feature is available in the paid subscription tier of the following services:

* [Azure AI Vision](computer-vision/index.yml)
* [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
### Can I test this feature using a free subscription?

No, the autoscale feature isn't available to free tier subscriptions.

### Does enabling the autoscale feature mean my resource will never be throttled again?

No, you may still get `429` errors for exceeding the rate limit. If your application triggers a spike and your resource reports a `429` response, autoscale checks the available capacity projection to see whether the current capacity can accommodate a rate limit increase, and responds within five minutes.
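A client that receives a `429` can back off briefly and retry while autoscale raises the limit. Here is a minimal sketch; the `send` callable, retry counts, and delay values are illustrative assumptions, not part of any Azure SDK:

```python
import time

def call_with_backoff(send, max_retries=5, base_delay=1.0):
    """Call `send()` (assumed to return an HTTP status code and a body),
    retrying with exponential backoff whenever the service answers 429."""
    for attempt in range(max_retries):
        status, body = send()
        if status != 429:
            return status, body
        # Give autoscale time to raise the limit before retrying.
        time.sleep(base_delay * (2 ** attempt))
    return status, body
```

In production code you would also honor a `Retry-After` header if the service returns one, rather than relying only on a fixed backoff schedule.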
### Can I disable the autoscale feature?

Yes, you can disable the autoscale feature through the Azure portal or CLI and return to your default call rate limit setting. If your resource was previously approved for a higher default TPS, it goes back to that rate. It can take up to five minutes for the changes to take effect.
## Next steps

* [Plan and Manage costs for Azure AI services](../ai-studio/how-to/costs-plan-manage.md).
* [Optimize your cloud investment with Microsoft Cost Management](/azure/cost-management-billing/costs/cost-mgt-best-practices?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
* Learn about how to [prevent unexpected costs](/azure/cost-management-billing/cost-management-billing-overview?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
* Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.
**articles/ai-services/computer-vision/concept-brand-detection.md** (9 additions, 4 deletions)
manager: nitinme
ms.service: azure-ai-vision
ms.topic: conceptual
ms.date: 01/22/2025
ms.author: pafarley
---
# Brand detection

Brand detection is a specialized mode of [object detection](concept-object-detection.md) that uses a database of thousands of global corporate logos to identify commercial brands in images or video. You can use this feature, for example, to discover which brands are most popular on social media or most prevalent in media product placement.

## How it works

The Azure AI Vision service detects whether there are brand logos in a given image. If a brand logo is detected, the service returns the brand name, a confidence score, and the coordinates of a bounding box around the logo.
The built-in logo database covers popular brands in consumer electronics, clothing, and more. If you find that the Vision service doesn't detect the brand you're looking for, you can also try creating and training your own logo detector using the [Custom Vision](../custom-vision-service/index.yml) service.

## Brand detection example
The brand detection feature is part of the [Analyze Image](/rest/api/computervision/analyze-image) API. You can call this API by using a native SDK or through REST calls. Include `Brands` in the `visualFeatures` query parameter. Then, when you get the full JSON response, parse the string for the contents of the `"brands"` section.
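For illustration, the `"brands"` section of the response might be parsed as follows. This is a hedged Python sketch: the sample JSON fragment is made up for the example, and endpoint and key handling are omitted.

```python
import json

def extract_brands(response_text: str):
    """Pull (name, confidence, bounding box) tuples out of the
    "brands" section of an Analyze Image JSON response."""
    data = json.loads(response_text)
    return [
        (b["name"], b["confidence"], b["rectangle"])
        for b in data.get("brands", [])
    ]

# Illustrative response fragment, not real service output.
sample = '''{"brands": [
  {"name": "Contoso", "confidence": 0.91,
   "rectangle": {"x": 58, "y": 106, "w": 55, "h": 71}}
]}'''

for name, confidence, box in extract_brands(sample):
    print(name, confidence, box["x"], box["y"])
```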
You can detect adult content with the [Analyze Image 3.2](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. When you add the value of `Adult` to the **visualFeatures** query parameter, the API returns three boolean properties—`isAdultContent`, `isRacyContent`, and `isGoryContent`—in its JSON response. The method also returns corresponding properties—`adultScore`, `racyScore`, and `goreScore`—which represent confidence scores between zero and one for each respective category.
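As a sketch, the three boolean flags and their scores can be read out of the response like this. The sample JSON fragment below is illustrative, not real service output, and the summarizing helper is a hypothetical convenience, not part of any SDK:

```python
import json

def summarize_adult_result(response_text: str):
    """Return which of the three categories were flagged,
    together with their confidence scores."""
    adult = json.loads(response_text)["adult"]
    flags = {
        "adult": (adult["isAdultContent"], adult["adultScore"]),
        "racy": (adult["isRacyContent"], adult["racyScore"]),
        "gory": (adult["isGoryContent"], adult["goreScore"]),
    }
    return {k: score for k, (flagged, score) in flags.items() if flagged}

# Illustrative response fragment, not real service output.
sample = '''{"adult": {
  "isAdultContent": false, "adultScore": 0.01,
  "isRacyContent": true,  "racyScore": 0.68,
  "isGoryContent": false, "goreScore": 0.02}}'''

print(summarize_adult_result(sample))  # only the flagged categories remain
```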
## Next step

> [!div class="nextstepaction"]
> [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
**articles/ai-services/computer-vision/concept-detecting-color-schemes.md** (4 additions, 1 deletion)
The color scheme detection feature is part of the [Analyze Image 3.2](/rest/api/computervision/analyze-image?view=rest-computervision-v3.2) API. You can call this API through a native SDK or through REST calls. Include `Color` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"color"` section.
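For example, the main fields of the `"color"` section can be read out as follows. This is a Python sketch; the field names follow the documented `"color"` response section, while the sample values are made up:

```python
import json

def parse_color(response_text: str):
    """Read the main fields of the "color" section of an
    Analyze Image JSON response."""
    color = json.loads(response_text)["color"]
    return {
        "foreground": color["dominantColorForeground"],
        "background": color["dominantColorBackground"],
        "accent": color["accentColor"],
        "black_and_white": color["isBWImg"],
    }

# Illustrative response fragment, not real service output.
sample = '''{"color": {
  "dominantColorForeground": "Black",
  "dominantColorBackground": "White",
  "dominantColors": ["White"],
  "accentColor": "19A4B2",
  "isBWImg": false}}'''

print(parse_color(sample))
```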
## Next step

> [!div class="nextstepaction"]
> [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
**articles/ai-services/computer-vision/concept-detecting-faces.md** (5 additions, 2 deletions)
manager: nitinme
ms.service: azure-ai-vision
ms.topic: conceptual
ms.date: 01/22/2025
ms.author: pafarley
---
The face detection feature is part of the [Analyze Image 3.2](/rest/api/computervision/analyze-image/analyze-image?view=rest-computervision-v3.2&tabs=HTTP) API. You can call this API through a native SDK or through REST calls. Include `Faces` in the **visualFeatures** query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the `"faces"` section.
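As an illustration, the bounding boxes in the `"faces"` section can be collected like this. The Python helper and the sample JSON fragment below are made up for the example, not real service output:

```python
import json

def extract_faces(response_text: str):
    """Collect each detected face's rectangle from the "faces"
    section of an Analyze Image JSON response."""
    faces = json.loads(response_text).get("faces", [])
    return [f["faceRectangle"] for f in faces]

# Illustrative response fragment, not real service output.
sample = '''{"faces": [
  {"faceRectangle": {"left": 472, "top": 263, "width": 175, "height": 175}}
]}'''

for rect in extract_faces(sample):
    print(rect["left"], rect["top"], rect["width"], rect["height"])
```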
## Next step

> [!div class="nextstepaction"]
> [Quickstart: Vision REST API or client libraries](./quickstarts-sdk/image-analysis-client-library.md?pivots=programming-language-csharp)
## Data structures used with Identify

The Face Identify API uses container data structures to hold face recognition data in the form of **Person** objects. There are three types of containers for this purpose, listed from oldest to newest. We recommend you always use the newest one.
### PersonGroup

**PersonGroup** is the smallest container data structure.

- You need to specify a recognition model when you create a **PersonGroup**. When any faces are added to that **PersonGroup**, it uses that model to process them. This model version must match the model used to produce the face IDs from the Detect API.
- You must call the Train API to make any new face data reflect in the Identify API results. This includes adding or removing faces and adding or removing persons.
- For the free tier subscription, it can hold up to 1,000 Persons. For an S0 paid subscription, it can hold up to 10,000 Persons.

**PersonGroupPerson** represents a person to be identified. It can hold up to 248 faces.
**PersonDirectory** is the newest data structure of this kind. It supports a larger scale and higher accuracy. Each Azure Face resource has a single default **PersonDirectory** data structure. It's a flat list of **PersonDirectoryPerson** objects; it can hold up to 20 million of them.

**PersonDirectoryPerson** represents a person to be identified. Based on the older **PersonGroupPerson** model, it allows you to add faces from different recognition models to the same person. However, the Identify operation can only match faces obtained with the same recognition model.

**DynamicPersonGroup** is a lightweight data structure that allows you to dynamically reference a **PersonDirectoryPerson**. It doesn't require the Train operation: once the data is updated, it's ready to be used with the Identify API.
| | **LargePersonGroup** | **PersonDirectory** |
| --- | --- | --- |
| Capacity | A **LargePersonGroup** can hold up to 1 million **PersonGroupPerson** objects. | The collection can store up to 20 million **PersonDirectoryPerson** identities. |
| Ownership | The **PersonGroupPerson** objects are exclusively owned by the **LargePersonGroup** they belong to. If you want the same identity kept in multiple groups, you'll have to [Create Large Person Group Person](/rest/api/face/person-group-operations/create-large-person-group-person) and [Add Large Person Group Person Face](/rest/api/face/person-group-operations/add-large-person-group-person-face) for each group individually, ending up with a set of person IDs in several groups. | The **PersonDirectoryPerson** objects are stored directly inside the **PersonDirectory**, as a flat list. You can use an in-place person ID list to [Identify From Person Directory](/rest/api/face/face-recognition-operations/identify-from-person-directory), or optionally [Create Dynamic Person Group](/rest/api/face/person-directory-operations/create-dynamic-person-group) and include a person in the group in a hybrid way. A created **PersonDirectoryPerson** object can be referenced by multiple **DynamicPersonGroup** objects without duplication. |
| Model | The recognition model is determined by the **LargePersonGroup**. New faces for all **PersonGroupPerson** objects become associated with this model when they're added to it. | The **PersonDirectoryPerson** object keeps separate storage per recognition model. You can specify the model when you add new faces, but the Identify API can only match faces obtained with the same recognition model as the query faces. |
| Training | You must call the Train API to make any new face/person data reflect in the Identify API results. | There's no need to make Train calls, but APIs such as [Add Person Face](/rest/api/face/person-directory-operations/add-person-face) become long-running operations, which means you should use the `Operation-Location` response header to check whether the update is complete. |
| Cleanup | [Delete Large Person Group](/rest/api/face/person-group-operations/delete-large-person-group) also deletes all the **PersonGroupPerson** objects it holds, along with their face data. | [Delete Dynamic Person Group](/rest/api/face/person-directory-operations/delete-dynamic-person-group) only unreferences the **PersonDirectoryPerson**. To delete the actual person and the face data, see [Delete Person](/rest/api/face/person-directory-operations/delete-person). |
## Data structures used with Find Similar

Unlike the Identify API, the Find Similar API is used in applications where the enrollment of a **Person** is hard to set up (for example, face images captured from video analysis, or from a photo album analysis).
### FaceList

**FaceList** represents a flat list of persisted faces. It can hold up to 1,000 faces.

### LargeFaceList

**LargeFaceList** is a later version, which can hold up to 1,000,000 faces.
## Next step

Now that you're familiar with the face data structures, write a script that uses them in the Identify operation.