Commit 85bd2bf

Learn Editor: Update provisioned-throughput.md
1 parent 065dfb2 commit 85bd2bf

File tree

1 file changed: +3 additions, -3 deletions

articles/ai-services/openai/concepts/provisioned-throughput.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -41,13 +41,13 @@ An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model.
 ## How much throughput per PTU you get for each model
 
 The amount of throughput (tokens per minute or TPM) a deployment gets per PTU is a function of the input and output tokens in the minute. Generating output tokens requires more processing than input tokens, so the more output tokens generated, the lower your overall TPM. The service dynamically balances the input and output costs, so users do not have to set specific input and output limits. This approach means your deployment is resilient to fluctuations in the workload shape.
 
-To help simplify the sizing effort, the following table outlines the TPM per PTU for the `gpt-4o` and `gpt-4o-mini` models. The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1).
+To help simplify the sizing effort, the following table outlines the TPM per PTU for the `gpt-4o` and `gpt-4o-mini` models, which represents the maximum throughput when all the traffic is either input or output. The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1).
 
 | | **gpt-4o**, **2024-05-13** & **gpt-4o**, **2024-08-06** | **gpt-4o-mini**, **2024-07-18** |
 | --- | --- | --- |
 | Deployable Increments | 50 | 25 |
-| Input TPM per PTU | 2,500 | 37,000 |
-| Output TPM per PTU | 833 | 12,333 |
+| Max Input TPM per PTU | 2,500 | 37,000 |
+| Max Output TPM per PTU | 833 | 12,333 |
 | Latency Target Value | 25 Tokens Per Second* | 33 Tokens Per Second* |
 
 For a full list see the [AOAI Studio calculator](https://oai.azure.com/portal/calculator).
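The doc says the service dynamically balances input and output costs, with the table giving the all-input and all-output extremes per PTU. A minimal sizing sketch under that reading: treat a workload's input and output TPM as each consuming a linear share of a PTU's capacity, then round up to the model's deployable increment. The `MODEL_LIMITS` table and `estimate_ptus` helper are illustrative names, and the linear-blend model is an assumption, not the official AOAI Studio calculator's formula.

```python
import math

# Figures from the table above (all-input / all-output maximums per PTU).
MODEL_LIMITS = {
    "gpt-4o":      {"max_input_tpm": 2_500,  "max_output_tpm": 833,    "increment": 50},
    "gpt-4o-mini": {"max_input_tpm": 37_000, "max_output_tpm": 12_333, "increment": 25},
}

def estimate_ptus(model: str, input_tpm: float, output_tpm: float) -> int:
    """Rough PTU estimate: sum the fractional PTU load implied by input and
    output traffic, then round up to the model's deployable increment.
    Assumed linear model -- verify against the AOAI Studio calculator."""
    limits = MODEL_LIMITS[model]
    load = (input_tpm / limits["max_input_tpm"]
            + output_tpm / limits["max_output_tpm"])
    ptus = math.ceil(load)
    step = limits["increment"]
    return max(step, math.ceil(ptus / step) * step)

# Example: 100k input TPM + 20k output TPM on gpt-4o.
# 100_000/2_500 = 40 PTUs of input load; 20_000/833 ~= 24.0 of output load;
# total ~64 PTUs, rounded up to the 50-PTU increment.
print(estimate_ptus("gpt-4o", 100_000, 20_000))  # → 100
```

Because generating output tokens costs more capacity than processing input tokens, output-heavy workloads in this model consume PTUs roughly three times faster than input-heavy ones, which matches the table's 2,500 vs. 833 TPM figures for `gpt-4o`.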
