Commit 6ff90e8

Merge pull request #5705 from aahill/ptu-update
updating tpm
2 parents: 2b510bd + a1e6526

1 file changed: +2 -2 lines changed

articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@ title: Understanding costs associated with provisioned throughput units (PTU)
 description: Learn about provisioned throughput costs and billing in Azure AI Foundry.
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 06/13/2025
+ms.date: 06/25/2025
 manager: nitinme
 author: aahill
 ms.author: aahi
@@ -83,7 +83,7 @@ For example, for `gpt-4.1:2025-04-14`, 1 output token counts as 4 input tokens t
 |Global & data zone provisioned scale increment| 5 | 5|5| 5 | 5 |5|5|5|5| 100|100|
 |Regional provisioned minimum deployment|25| 50|25| 25 |50 | 25|25|50|25| NA|NA|
 |Regional provisioned scale increment|25| 50|25| 25 | 50 | 25|50|50|25|NA|NA|
-|Input TPM per PTU|5,400 | 3,000|14,900| 59,400 | 600 | 2,500|230|2,500|37,000|4,000|4,000|
+|Input TPM per PTU|5,400 | 3,000|14,900| 59,400 | 3,000 | 2,500|230|2,500|37,000|4,000|4,000|
 |Latency Target Value| 99% > 66 Tokens Per Second\* | 99% > 40 Tokens Per Second\* | 99% > 50 Tokens Per Second\*| 99% > 60 Tokens Per Second\* | 99% > 40 Tokens Per Second\* | 99% > 66 Tokens Per Second\* | 99% > 25 Tokens Per Second\* | 99% > 25 Tokens Per Second\* | 99% > 33 Tokens Per Second\* | 99% > 50 Tokens Per Second\*| 99% > 50 Tokens Per Second\*|

 \* Calculated as the average request latency on a per-minute basis across the month.
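The corrected row raises the Input TPM per PTU figure for one model column from 600 to 3,000. As a rough illustration of how a TPM-per-PTU figure and the 4x output-token weighting noted in the second hunk could feed into a deployment size, here is a minimal sketch. The `estimate_ptus` helper, the 15-PTU floor, the rounding-to-increment rule, and the example workload are all assumptions for illustration, not the documented sizing procedure.

```python
import math

def estimate_ptus(input_tpm: float, output_tpm: float,
                  tpm_per_ptu: float = 3_000,  # "Input TPM per PTU" from the corrected table column
                  min_ptus: int = 15,          # hypothetical floor; the real table lists per-model minimums
                  increment: int = 5) -> int:  # scale increment value shown in the table
    """Rough PTU estimate: output tokens weighted 4x, per the gpt-4.1 note in the diff."""
    equivalent_tpm = input_tpm + 4 * output_tpm           # convert output load to input-equivalent TPM
    ptus = max(math.ceil(equivalent_tpm / tpm_per_ptu), min_ptus)
    return math.ceil(ptus / increment) * increment        # round up to a deployable increment

# Example workload: 100,000 input and 20,000 output tokens per minute
print(estimate_ptus(100_000, 20_000))  # -> 60 PTUs under these assumptions
```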
