Commit 8f4dbe4

Learn Editor: Update latency.md
1 parent 25dbe07 commit 8f4dbe4

1 file changed: +1 −1 lines changed


articles/ai-services/openai/how-to/latency.md

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ Assuming all requests for a given workload are uniform, the prompt tokens and co
 
 Once system level throughput has been estimated for a given workload, these estimates can be used to size Standard and Provisioned deployments. For Standard deployments, the input and output TPM values can be combined to estimate the total TPM to be assigned to a given deployment. For Provisioned deployments, the request token usage data (for the dedicated capacity calculator experience) or input and output TPM values (for the deployment capacity calculator experience) can be used to estimate the number of PTUs required to support a given workload.
 
-Here are a few examples for GPT-4o mini model:
+Here are a few examples for the GPT-4o mini model:
 
 | Prompt Size (tokens) |Generation size (tokens) |Requests per minute |Input TPM|Output TPM|Total TPM|PTUs required |
 |--|--|--| -------- | -------- | -------- |--|
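
The changed paragraph and the table header it precedes describe a simple sizing calculation: for a uniform workload, input TPM is the prompt size times the requests per minute, output TPM is the generation size times the requests per minute, and their sum is the total TPM used to size a Standard deployment. Below is a minimal sketch of that arithmetic in Python; the function name and example figures are illustrative rather than taken from the article, and the PTUs-required column still comes from the Azure OpenAI capacity calculator rather than from a formula.

```python
def estimate_tpm(prompt_tokens: int, generation_tokens: int, requests_per_minute: int) -> dict:
    """Estimate input, output, and total tokens per minute for a uniform workload."""
    input_tpm = prompt_tokens * requests_per_minute       # tokens sent per minute
    output_tpm = generation_tokens * requests_per_minute  # tokens generated per minute
    return {
        "input_tpm": input_tpm,
        "output_tpm": output_tpm,
        # Standard deployments are sized on this combined figure; PTU counts for
        # Provisioned deployments come from the capacity calculator, not this sum.
        "total_tpm": input_tpm + output_tpm,
    }

# Illustrative workload: 1,000-token prompts, 200-token generations, 30 requests/minute.
print(estimate_tpm(1000, 200, 30))
# -> {'input_tpm': 30000, 'output_tpm': 6000, 'total_tpm': 36000}
```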
