articles/ai-foundry/foundry-models/quotas-limits.md
---
author: msakande
ms.service: azure-ai-model-inference
ms.custom: ignite-2024, github-universe-2024
ms.topic: concept-article
ms.date: 08/14/2025
ms.author: mopeakande
ms.reviewer: shiyingfu
reviewer: swingfu
---
# Azure AI Foundry Models quotas and limits
This article provides a quick reference and detailed description of the quotas and limits for Azure AI Foundry Models. For quotas and limits specific to the Azure OpenAI in Foundry Models, see [Quota and limits in Azure OpenAI](../openai/quotas-limits.md).
## Quotas and limits reference
Azure uses quotas and limits to prevent budget overruns due to fraud and to honor Azure capacity constraints. Consider these limits as you scale for production workloads. The following sections provide a quick guide to the default quotas and limits that apply to the Azure AI model inference service in Azure AI Foundry:
### Resource limits
The following table lists limits for Foundry Models for the following rates:

- Tokens per minute
- Requests per minute
- Concurrent requests

| Models | Tokens per minute | Requests per minute | Concurrent requests |
|--|--|--|--|
| Azure OpenAI models | Varies per model and SKU. See [limits for Azure OpenAI](../openai/quotas-limits.md). | Varies per model and SKU. See [limits for Azure OpenAI](../openai/quotas-limits.md). | not applicable |
| - Flux-Pro 1.1<br />- Flux.1-Kontext Pro | not applicable | 2 capacity units (6 requests per minute) | not applicable |
| Rest of models | 400,000 | 1,000 | 300 |

To increase your quota:

- For Azure OpenAI, use [Azure AI Foundry Service: Request for Quota Increase](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4xPXO648sJKt4GoXAed-0pUMFE1Rk9CU084RjA0TUlVSUlMWEQzVkJDNCQlQCN0PWcu) to submit your request.
- For other models, see [request increases to the default limits](#request-increases-to-the-default-limits).

Due to high demand, we evaluate limit increase requests individually.
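As an illustration only, the default limits above (400,000 tokens and 1,000 requests per minute) can be tracked client-side before each call so that your application throttles itself instead of hitting the service limit. The `RateLimitGuard` class and its sliding-window approach are assumptions for this sketch, not part of any Azure SDK:

```python
import time
from collections import deque


class RateLimitGuard:
    """Client-side guard for per-minute limits (illustrative sketch).

    Defaults mirror the documented defaults for "rest of models":
    400,000 tokens per minute and 1,000 requests per minute.
    """

    def __init__(self, tokens_per_minute=400_000, requests_per_minute=1_000):
        self.tpm = tokens_per_minute
        self.rpm = requests_per_minute
        self.events = deque()  # (timestamp, tokens) for each sent request

    def _prune(self, now):
        # Drop events older than the 60-second window.
        while self.events and now - self.events[0][0] >= 60:
            self.events.popleft()

    def allow(self, tokens, now=None):
        """Return True (and record the request) if `tokens` fits in the window."""
        now = time.monotonic() if now is None else now
        self._prune(now)
        used_tokens = sum(t for _, t in self.events)
        if len(self.events) + 1 > self.rpm or used_tokens + tokens > self.tpm:
            return False
        self.events.append((now, tokens))
        return True
```

A caller would check `guard.allow(estimated_tokens)` before sending a request and back off briefly when it returns `False`. This is a local estimate only; the service's own accounting is authoritative.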
### Other limits
| Limit name | Limit value |
|--|--|
| Max number of custom headers in API requests<sup>1</sup> | 10 |
<sup>1</sup> Our current APIs allow up to 10 custom headers, which the pipeline passes through and returns. If you exceed this header count, your request results in an HTTP 431 error. To resolve this error, reduce the header volume. **Future API versions won't pass through custom headers**. We recommend that you don't depend on custom headers in future system architectures.
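A pre-flight check like the following can catch the HTTP 431 condition before a request leaves your application. This is a hypothetical helper, and it assumes (for illustration only) that your custom headers all carry an `x-` prefix; adjust the filter to however your application marks its own headers:

```python
MAX_CUSTOM_HEADERS = 10  # current documented limit; exceeding it yields HTTP 431


def check_custom_headers(headers: dict) -> dict:
    """Raise before sending if the request carries too many custom headers.

    Illustrative assumption: custom headers are the `x-`-prefixed ones.
    """
    custom = {k: v for k, v in headers.items() if k.lower().startswith("x-")}
    if len(custom) > MAX_CUSTOM_HEADERS:
        raise ValueError(
            f"{len(custom)} custom headers exceed the limit of "
            f"{MAX_CUSTOM_HEADERS}; the service would reject this "
            "request with HTTP 431"
        )
    return headers
```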
## Usage tiers
Global Standard deployments use Azure's global infrastructure to dynamically route customer traffic to the data center with the best availability for the customer's inference requests. This infrastructure enables more consistent latency for customers with low to medium levels of traffic. Customers with high sustained levels of usage might see more variability in response latency.
The Usage Limit determines the level of usage above which customers might see larger variability in response latency. A customer's usage is defined per model and is the total tokens consumed across all deployments in all subscriptions in all regions for a given tenant.
## Request increases to the default limits
You can submit limit increase requests, which we evaluate one at a time. [Open an online customer support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest/). When you request an endpoint limit increase, provide the following information:
1. Select **Service and subscription limits (quotas)** as the **Issue type** when you open the support request.
1. Select the subscription you want to use.
1. Select **Cognitive Services** as **Quota type**.
1. Select **Next**.
1. On the **Additional details** tab, provide detailed reasons for the limit increase so that your request can be processed. Be sure to add the following information to the reason for limit increase:
* Model name, model version (if applicable), and deployment type (SKU).
* Description of your scenario and workload.
* Rationale for the requested increase.
    * Target throughput: tokens per minute, requests per minute, and other relevant metrics.
    * Planned timeline (by when you need increased limits).
1. Select **Save and continue**.
## General best practices to stay within rate limits
To minimize issues related to rate limits, use the following techniques:
- Implement retry logic in your application.
- Avoid sharp changes in the workload. Increase the workload gradually.
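The retry-logic recommendation above is commonly implemented as exponential backoff with jitter. The sketch below is illustrative: `RateLimitError` and `call_with_retries` are hypothetical names, not part of an Azure SDK, so map them onto however your client surfaces HTTP 429 responses and the `Retry-After` header:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for your SDK's HTTP 429 exception (hypothetical)."""

    def __init__(self, retry_after=None):
        super().__init__("429 Too Many Requests")
        self.retry_after = retry_after  # seconds, from the Retry-After header


def call_with_retries(send_request, max_attempts=5, base_delay=1.0):
    """Call `send_request`, retrying on rate-limit errors with backoff."""
    for attempt in range(max_attempts):
        try:
            return send_request()
        except RateLimitError as err:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error to the caller
            # Prefer the service's Retry-After hint; otherwise back off
            # exponentially, with a little jitter to avoid thundering herds.
            delay = err.retry_after or base_delay * (2 ** attempt)
            time.sleep(delay + random.uniform(0, 0.1))
```

Pairing this with a gradual workload ramp-up (the second bullet) keeps retries rare rather than turning them into a steady-state behavior.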