articles/ai-services/openai/concepts/provisioned-throughput.md
40 additions & 23 deletions
@@ -30,35 +30,34 @@ An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model.
| Topic | Provisioned|
|---|---|
-| What is it? |Provides guaranteed throughput at smaller increments than the existing provisioned offer. Deployments have a consistent max latency for a given model-version. |
+| What is it? |Provides guaranteed throughput at smaller increments than the existing provisioned offer. Deployments have a consistent max latency for a given model-version. |
| Who is it for? | Customers who want guaranteed throughput with minimal latency variance. |
| Quota |Provisioned Managed Throughput Unit, Global Provisioned Managed Throughput Unit, or Data Zone Provisioned Managed Throughput Unit assigned per region. Quota can be used across any available Azure OpenAI model.|
| Latency | Max latency constrained from the model. Overall latency is a factor of call shape. |
-|Estimating size |Provided calculator in Azure AI Foundry & benchmarking script. |
+|Estimating size |Provided sizing calculator in Azure AI Foundry.|
|Prompt caching | For supported models, we discount up to 100% of cached input tokens. |
## How much throughput per PTU you get for each model
-The amount of throughput (tokens per minute or TPM) a deployment gets per PTU is a function of the input and output tokens in the minute. Generating output tokens requires more processing than input tokens and so the more output tokens generated the lower your overall TPM. The service dynamically balances the input & output costs, so users do not have to set specific input and output limits. This approach means your deployment is resilient to fluctuations in the workload shape.
+The amount of throughput (tokens per minute or TPM) a deployment gets per PTU is a function of the input and output tokens in the minute. Generating output tokens requires more processing than input tokens. For the models specified in the table below, 1 output token counts as 3 input tokens towards your TPM per PTU limit. The service dynamically balances the input & output costs, so users do not have to set specific input and output limits. This approach means your deployment is resilient to fluctuations in the workload shape.

-To help with simplifying the sizing effort, the following table outlines the TPM per PTU for the `gpt-4o` and `gpt-4o-mini` models which represent the max TPM assuming all traffic is either input or output. To understand how different ratios of input and output tokens impact your Max TPM per PTU, see the [Azure OpenAI capacity calculator](https://oai.azure.com/portal/calculator). The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1)
+To help with simplifying the sizing effort, the following table outlines the TPM per PTU for the specified models. To understand the impact of output tokens on the TPM per PTU limit, use the 3 input token to 1 output token ratio. For a detailed understanding of how different ratios of input and output tokens impact the throughput your workload needs, see the [Azure OpenAI capacity calculator](https://oai.azure.com/portal/calculator). The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1)
|Latency Target Value |25 Tokens Per Second|33 Tokens Per Second|
For a full list see the [Azure OpenAI Service in Azure AI Foundry portal calculator](https://oai.azure.com/portal/calculator).
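As a rough illustration of the 3-input-token-to-1-output-token weighting the updated text above describes, here is a minimal sizing sketch. It is not an official sizing method: the TPM-per-PTU figure is a placeholder and the function names are hypothetical; real values come from the table above and the capacity calculator.

```python
# Illustrative sketch only: PTU sizing with the 3:1 output-token weighting.
# ASSUMED_TPM_PER_PTU is a placeholder, not a published figure.
import math

OUTPUT_TOKEN_WEIGHT = 3       # 1 output token counts as 3 input tokens
ASSUMED_TPM_PER_PTU = 2500    # hypothetical input-TPM per PTU for some model

def weighted_tpm(input_tpm: int, output_tpm: int) -> int:
    """Input-equivalent tokens per minute a workload consumes."""
    return input_tpm + OUTPUT_TOKEN_WEIGHT * output_tpm

def estimated_ptus(input_tpm: int, output_tpm: int,
                   tpm_per_ptu: int = ASSUMED_TPM_PER_PTU) -> int:
    """Round up to a whole number of PTUs for the weighted throughput."""
    return math.ceil(weighted_tpm(input_tpm, output_tpm) / tpm_per_ptu)

# Example: 60,000 input tokens/min and 10,000 output tokens/min
# count as 60,000 + 3 * 10,000 = 90,000 input-equivalent TPM.
print(weighted_tpm(60_000, 10_000))    # 90000
print(estimated_ptus(60_000, 10_000))  # 36 with the placeholder figure above
```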
> [!NOTE]
-> Global provisioned deployments are only supported for gpt-4o, 2024-08-06 and gpt-4o-mini, 2024-07-18 models at this time. Data zone provisioned deployments are only supported for gpt-4o, 2024-08-06, gpt-4o, 2024-05-13, and gpt-4o-mini, 2024-07-18 models at this time. For more information on model availability, review the [models documentation](./models.md).
+> Global provisioned and data zone provisioned deployments are only supported for gpt-4o and gpt-4o-mini models at this time. For more information on model availability, review the [models documentation](./models.md).
## Key concepts
@@ -73,11 +72,11 @@ az cognitiveservices account deployment create \
--name <myResourceName> \
--resource-group <myResourceGroupName> \
--deployment-name MyDeployment \
---model-name gpt-4 \
---model-version 0613 \
+--model-name gpt-4o \
+--model-version 2024-08-06 \
--model-format OpenAI \
---sku-capacity 100 \
---sku-name ProvisionedManaged
+--sku-capacity 15 \
+--sku-name GlobalProvisionedManaged
```
### Quota
@@ -132,7 +131,7 @@ If an acceptable region isn't available to support the desired model, version and
### Determining the number of PTUs needed for a workload
-PTUs represent an amount of model processing capacity. Similar to your computer or databases, different workloads or requests to the model will consume different amounts of underlying processing capacity. The conversion from call shape characteristics (prompt size, generation size and call rate) to PTUs is complex and nonlinear. To simplify this process, you can use the [Azure OpenAI Capacity calculator](https://oai.azure.com/portal/calculator) to size specific workload shapes.
+PTUs represent an amount of model processing capacity. Similar to your computer or databases, different workloads or requests to the model will consume different amounts of underlying processing capacity. The conversion from throughput needs to PTUs can be approximated using historical token usage data or call shape estimations (input tokens, output tokens, and requests per minute) as outlined in our [performance and latency](../how-to/latency.md) documentation. To simplify this process, you can use the [Azure OpenAI Capacity calculator](https://oai.azure.com/portal/calculator) to size specific workload shapes.
A few high-level considerations:
- Generations require more capacity than prompts
@@ -165,16 +164,16 @@ For provisioned deployments, we use a variation of the leaky bucket algorithm to
1. When a request is made:
a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100%
+
+b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining the prompt tokens, less any cached tokens, and the specified `max_tokens` in the call. A customer can receive up to a 100% discount on their prompt tokens depending on the size of their cached tokens. If the `max_tokens` parameter is not specified, the service estimates a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size.
+
+1. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic:

-b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified `max_tokens` in the call. For requests that include at least 1024 cached tokens, the cached tokens are subtracted from the prompt token value. A customer can receive up to a 100% discount on their prompt tokens depending on the size of their cached tokens. If the `max_tokens` parameter is not specified, the service estimates a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size.
-
-1. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic:
-
-a. If the actual > estimated, then the difference is added to the deployment's utilization.
-
-b. If the actual < estimated, then the difference is subtracted.
-
-1. The overall utilization is decremented down at a continuous rate based on the number of PTUs deployed.
+a. If the actual > estimated, then the difference is added to the deployment's utilization.
+
+b. If the actual < estimated, then the difference is subtracted.
+
+1. The overall utilization is decremented down at a continuous rate based on the number of PTUs deployed.
> [!NOTE]
> Calls are accepted until utilization reaches 100%. Bursts just over 100% may be permitted in short periods, but over time, your traffic is capped at 100% utilization.
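The following is a simplified, illustrative model of the utilization accounting described in the list above, not the service's actual implementation. The class name, the token-to-capacity conversion, the default `max_tokens` guess, and the decay rate are all placeholders chosen for the sketch.

```python
# Simplified model of the leaky-bucket-style utilization accounting described
# above. All numeric constants are made-up placeholders.
import time

class ProvisionedUtilization:
    def __init__(self, ptus: int, capacity_per_ptu_per_min: float = 2500.0):
        self.capacity_per_min = ptus * capacity_per_ptu_per_min
        self.used = 0.0                      # outstanding estimated token cost
        self.last_decay = time.monotonic()

    def _decay(self) -> None:
        # Utilization drains at a continuous rate based on the PTUs deployed.
        now = time.monotonic()
        elapsed_min = (now - self.last_decay) / 60.0
        self.used = max(0.0, self.used - self.capacity_per_min * elapsed_min)
        self.last_decay = now

    def utilization(self) -> float:
        self._decay()
        return self.used / self.capacity_per_min

    def try_admit(self, prompt_tokens: int, cached_tokens: int,
                  max_tokens: int | None) -> float:
        """Return the estimated cost if admitted; raise when above 100%
        (the real service returns 429 with a retry-after-ms header)."""
        self._decay()
        if self.used > self.capacity_per_min:
            raise RuntimeError("429: utilization above 100%")
        # Prompt tokens, less any cached tokens, plus max_tokens (or a guess).
        estimate = (prompt_tokens - cached_tokens) + (max_tokens or 1000)
        self.used += estimate
        return estimate

    def settle(self, estimate: float, actual_tokens: int) -> None:
        # Correct the estimate once the real compute cost is known.
        self.used += actual_tokens - estimate
```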
@@ -184,12 +183,30 @@ For provisioned deployments, we use a variation of the leaky bucket algorithm to
#### How many concurrent calls can I have on my deployment?
-The number of concurrent calls you can achieve depends on each call's shape (prompt size, max_token parameter, etc.). The service continues to accept calls until the utilization reach 100%. To determine the approximate number of concurrent calls, you can model out the maximum requests per minute for a particular call shape in the [capacity calculator](https://oai.azure.com/portal/calculator). If the system generates less than the number of samplings tokens like max_token, it will accept more requests.
+The number of concurrent calls you can achieve depends on each call's shape (prompt size, `max_tokens` parameter, etc.). The service continues to accept calls until the utilization reaches 100%. To determine the approximate number of concurrent calls, you can model out the maximum requests per minute for a particular call shape in the [capacity calculator](https://oai.azure.com/portal/calculator). If the system generates less than the number of output tokens set for the `max_tokens` parameter, then the provisioned deployment will accept more requests.
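One back-of-the-envelope way to turn the calculator's maximum requests-per-minute figure into an approximate concurrency number is Little's law (an assumption on my part, not guidance from the article): concurrent calls ≈ RPM × average call duration in minutes.

```python
# Rough concurrency estimate via Little's law. max_requests_per_minute is
# whatever the capacity calculator reports for your call shape;
# avg_call_seconds is your observed end-to-end latency. Illustrative only.
def approx_concurrent_calls(max_requests_per_minute: float, avg_call_seconds: float) -> float:
    return max_requests_per_minute * (avg_call_seconds / 60.0)

print(approx_concurrent_calls(300, 8))  # ~40 calls in flight at 300 RPM, 8 s per call
```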
## What models and regions are available for provisioned throughput?