Commit 36176b0
Merge pull request #5262 from aahill/ptu-hotfix
updating models
2 parents 247ff0c + 1dd9018 commit 36176b0

File tree

1 file changed: +9, -9 lines changed


articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md

Lines changed: 9 additions & 9 deletions
@@ -3,7 +3,7 @@ title: Understanding costs associated with provisioned throughput units (PTU)
 description: Learn about provisioned throughput costs and billing in Azure OpenAI.
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 05/20/2025
+ms.date: 05/28/2025
 manager: nitinme
 author: aahill
 ms.author: aahi
@@ -77,14 +77,14 @@ The amount of throughput (measured in tokens per minute or TPM) a deployment get
 
 For example, for `gpt-4.1:2025-04-14`, 1 output token counts as 4 input tokens towards your utilization limit which matches the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/). Older models use a different ratio and for a deeper understanding on how different ratios of input and output tokens impact the throughput your workload needs, see the [Azure OpenAI capacity calculator](https://ai.azure.com/resource/calculator).
 
-|Topic| **gpt-4.1** | **gpt-4.1-mini** | **gpt-4.1-nano** | **o3** | **o3-mini** | **o1** | **gpt-4o** | **gpt-4o-mini** |
-| --- | --- | --- | --- | --- | --- | --- | --- | --- |
-|Global & data zone provisioned minimum deployment|15|15| 15 | 15 |15|15|15|15|
-|Global & data zone provisioned scale increment|5|5| 5 | 5 |5|5|5|5|
-|Regional provisioned minimum deployment|50|25| 25 |50 | 25|25|50|25|
-|Regional provisioned scale increment|50|25| 25 | 50 | 25|50|50|25|
-|Input TPM per PTU|3,000|14,900| 59,400 | 600 | 2,500|230|2,500|37,000|
-|Latency Target Value|44 Tokens Per Second|50 Tokens Per Second| 50 Tokens Per Second | 40 Tokens Per Second | 66 Tokens Per Second |25 Tokens Per Second|25 Tokens Per Second|33 Tokens Per Second|
+|Topic| **o4-mini** | **gpt-4.1** | **gpt-4.1-mini** | **gpt-4.1-nano** | **o3** | **o3-mini** | **o1** | **gpt-4o** | **gpt-4o-mini** |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+|Global & data zone provisioned minimum deployment| 15 | 15|15| 15 | 15 |15|15|15|15|
+|Global & data zone provisioned scale increment| 5 | 5|5| 5 | 5 |5|5|5|5|
+|Regional provisioned minimum deployment|25| 50|25| 25 |50 | 25|25|50|25|
+|Regional provisioned scale increment|25| 50|25| 25 | 50 | 25|50|50|25|
+|Input TPM per PTU|5,400 | 3,000|14,900| 59,400 | 600 | 2,500|230|2,500|37,000|
+|Latency Target Value| 66 Tokens Per Second | 40 Tokens Per Second|50 Tokens Per Second| 60 Tokens Per Second | 40 Tokens Per Second | 66 Tokens Per Second |25 Tokens Per Second|25 Tokens Per Second|33 Tokens Per Second|
 
 
 For a full list, see the [Azure OpenAI in Azure AI Foundry Models in Azure AI Foundry portal calculator](https://ai.azure.com/resource/calculator).
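The sizing logic described in the diff — each output token counting as 4 input tokens for gpt-4.1-series models, divided by the "Input TPM per PTU" value and rounded up to the deployment scale increment — can be sketched as follows. This is a rough illustration only, not part of any Azure tooling; `estimate_ptus` is a hypothetical helper, the output ratio varies by model, the sketch ignores per-model minimum deployment sizes, and the Azure OpenAI capacity calculator remains the authoritative source.

```python
import math

# "Input TPM per PTU" values from the updated table in this commit.
INPUT_TPM_PER_PTU = {
    "o4-mini": 5_400,
    "gpt-4.1": 3_000,
    "gpt-4.1-mini": 14_900,
    "gpt-4.1-nano": 59_400,
    "o3": 600,
    "o3-mini": 2_500,
    "o1": 230,
    "gpt-4o": 2_500,
    "gpt-4o-mini": 37_000,
}

def estimate_ptus(model, input_tpm, output_tpm, output_ratio=4, scale_increment=15):
    """Hypothetical estimate: PTUs needed, rounded up to the scale increment.

    output_ratio=4 reflects gpt-4.1-series billing (1 output token counts as
    4 input tokens toward utilization); older models use different ratios.
    """
    effective_input_tpm = input_tpm + output_tpm * output_ratio
    raw_ptus = effective_input_tpm / INPUT_TPM_PER_PTU[model]
    return math.ceil(raw_ptus / scale_increment) * scale_increment

# Example: 60,000 input TPM and 6,000 output TPM on gpt-4.1 with the
# global/data zone scale increment of 5 from the table:
# effective TPM = 60,000 + 6,000 * 4 = 84,000 -> 28 PTUs -> rounds up to 30.
print(estimate_ptus("gpt-4.1", 60_000, 6_000, scale_increment=5))
```

Rounding up to the scale increment mirrors how provisioned deployments are purchased in fixed steps; a real sizing exercise should also apply the minimum deployment size from the table.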
