
Commit fa93f1e

Update articles/ai-services/openai/concepts/provisioned-throughput.md
Co-authored-by: Michael <[email protected]>
1 parent a72dcd1 · commit fa93f1e

File tree

1 file changed (+1, -1)


articles/ai-services/openai/concepts/provisioned-throughput.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ The amount of throughput (tokens per minute or TPM) a deployment gets per PTU is
 
 To help with simplifying the sizing effort, the following table outlines the TPM per PTU for the specified models. To understand the impact of output tokens on the TPM per PTU limit, use the 3 input token to 1 output token ratio. For a detailed understanding of how different ratios of input and output tokens impact the throughput your workload needs, see the [Azure OpenAI capacity calculator](https://oai.azure.com/portal/calculator). The table also shows Service Level Agreement (SLA) Latency Target Values per model. For more information about the SLA for Azure OpenAI Service, see the [Service Level Agreements (SLA) for Online Services page](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1)
 
-|Topic| **gpt-4o** | **gpt-4o-mini** | **o1**
+|Topic| **gpt-4o** | **gpt-4o-mini** | **o1**|
 | --- | --- | --- | --- |
 |Global & data zone provisioned minimum deployment|15|15|15|
 |Global & data zone provisioned scale increment|5|5|5|
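The change adds the missing trailing pipe so the header row renders as a Markdown table. The two rows shown also encode the sizing rules the surrounding paragraph describes: global and data zone provisioned deployments start at 15 PTU and grow in 5-PTU increments. A minimal sizing sketch under those constraints, in Python, assuming a hypothetical `tpm_per_ptu` value (the real per-model figures come from the full table and the Azure OpenAI capacity calculator):

```python
import math

# Sizing constants taken from the table in this change
# (global & data zone provisioned deployments).
MIN_PTU = 15          # minimum deployment size
SCALE_INCREMENT = 5   # deployments grow in steps of this many PTUs

def required_ptus(workload_tpm: float, tpm_per_ptu: float) -> int:
    """Smallest valid deployment size (in PTUs) covering workload_tpm tokens/minute.

    tpm_per_ptu is model-specific; take it from the doc's table or the
    capacity calculator rather than from this sketch.
    """
    raw = math.ceil(workload_tpm / tpm_per_ptu)
    if raw <= MIN_PTU:
        return MIN_PTU
    # Round up from the minimum in whole scale increments.
    steps = math.ceil((raw - MIN_PTU) / SCALE_INCREMENT)
    return MIN_PTU + steps * SCALE_INCREMENT

# Hypothetical example: a 100,000 TPM workload at an assumed 2,500 TPM
# per PTU needs ceil(100000 / 2500) = 40 PTUs, already on the 5-PTU grid.
print(required_ptus(100_000, 2_500))  # -> 40
```

Note that the doc's TPM-per-PTU figures assume the 3 input token to 1 output token ratio mentioned in the paragraph above; workloads with a heavier output share should be sized with the capacity calculator instead.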
