Commit 80b31e2

Learn Editor: Update provisioned-throughput.md
1 parent b2a8256 commit 80b31e2

File tree

1 file changed: +4 -4 lines changed

articles/ai-services/openai/concepts/provisioned-throughput.md

Lines changed: 4 additions & 4 deletions
@@ -153,12 +153,12 @@ In the Provisioned-Managed and Global Provisioned-Managed offerings, each reques
 For Provisioned-Managed and Global Provisioned-Managed, we use a variation of the leaky bucket algorithm to maintain utilization below 100% while allowing some burstiness in the traffic. The high-level logic is as follows:

 1. Each customer has a set amount of capacity they can utilize on a deployment
-2. When a request is made:
+1. When a request is made:

-    a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100%
-
-    b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified `max_tokens` in the call. If the `max_tokens` parameter is not specified, the service estimates a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size.
+    a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100%

+    b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified `max_tokens` in the call. For requests that include at least 1024 cached tokens, the cached tokens are subtracted from the prompt token value. A customer can receive up to a 100% discount on their prompt tokens depending on the size of their cached tokens. If the `max_tokens` parameter is not specified, the service estimates a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size.
+
 3. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic:

     a. If the actual > estimated, then the difference is added to the deployment's utilization
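
To make the updated text above concrete, here is a minimal Python sketch of the leaky-bucket accounting it describes. This is not the service's implementation: the `ProvisionedDeployment` class, the tokens-per-second capacity unit, and the fallback generation estimate are all illustrative assumptions. Only the decision flow follows the documented behavior: reject with 429 and `retry-after-ms` above 100% utilization, estimate cost from prompt tokens plus `max_tokens` (subtracting cached tokens when there are at least 1024 of them), and correct the accounting once the actual cost is known.

```python
import time
from typing import Optional, Tuple


class ProvisionedDeployment:
    """Toy leaky-bucket accounting for a provisioned deployment.

    Illustrative only: the capacity unit, drain rate, and fallback
    generation estimate are assumptions, not the service's real constants.
    """

    def __init__(self, capacity_tokens_per_sec: float, fallback_max_tokens: int = 1000):
        self.capacity = capacity_tokens_per_sec        # tokens the bucket drains per second
        self.fallback_max_tokens = fallback_max_tokens  # stand-in for the service's estimate
        self.level = 0.0                                # estimated token cost currently in the bucket
        self.last_update = time.monotonic()

    def _drain(self) -> None:
        # The bucket leaks continuously at the provisioned rate.
        now = time.monotonic()
        self.level = max(0.0, self.level - self.capacity * (now - self.last_update))
        self.last_update = now

    def utilization(self) -> float:
        # Utilization relative to one second of provisioned capacity (1.0 == 100%).
        self._drain()
        return self.level / self.capacity

    def admit(self, prompt_tokens: int, cached_tokens: int = 0,
              max_tokens: Optional[int] = None) -> Tuple[bool, int, int]:
        """Return (accepted, retry_after_ms, estimated_cost) for an incoming call."""
        if self.utilization() > 1.0:
            # Above 100%: reject with 429 and report how long until utilization drops below 100%.
            retry_after_ms = int(max(self.level - self.capacity, 0.0) / self.capacity * 1000)
            return False, retry_after_ms, 0

        # Cached tokens are subtracted from the prompt when there are at least 1024 of them,
        # which can discount the prompt portion of the estimate by up to 100%.
        billed_prompt = max(prompt_tokens - cached_tokens, 0) if cached_tokens >= 1024 else prompt_tokens
        # If max_tokens is not supplied, fall back to an estimated value.
        estimated_cost = billed_prompt + (max_tokens if max_tokens is not None else self.fallback_max_tokens)
        self.level += estimated_cost
        return True, 0, estimated_cost

    def settle(self, estimated_cost: int, actual_cost: int) -> None:
        # Once the call finishes, correct the accounting: if actual > estimated, the
        # difference is added; crediting back an over-estimate is assumed to be symmetric.
        self._drain()
        self.level = max(0.0, self.level + (actual_cost - estimated_cost))


# Example: a hypothetical 100K-tokens-per-second deployment handling one call.
deployment = ProvisionedDeployment(capacity_tokens_per_sec=100_000)
ok, retry_ms, estimate = deployment.admit(prompt_tokens=6_000, cached_tokens=2_048, max_tokens=500)
if ok:
    deployment.settle(estimated_cost=estimate, actual_cost=4_200)  # actual cost known at completion
else:
    print(f"429: retry after {retry_ms} ms")
```

In this toy model you can also see why the documented advice to keep `max_tokens` close to the true generation size matters: an oversized `max_tokens` inflates the estimated cost at admission time and is only credited back at settlement, reducing how many concurrent requests fit under 100% utilization in the meantime.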
