
Commit 1d5d4be

resolve conflict

1 parent a74f9e1 commit 1d5d4be

File tree

1 file changed: +6 −6 lines changed


articles/ai-services/openai/concepts/provisioned-throughput.md

Lines changed: 6 additions & 6 deletions

@@ -163,17 +163,17 @@ For Provisioned-Managed and Global Provisioned-Managed, we use a variation of th
 1. Each customer has a set amount of capacity they can utilize on a deployment
 1. When a request is made:

-   a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100%
+   a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100%

-   b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified `max_tokens` in the call. For requests that include at least 1024 cached tokens, the cached tokens are subtracted from the prompt token value. A customer can receive up to a 100% discount on their prompt tokens depending on the size of their cached tokens. If the `max_tokens` parameter is not specified, the service estimates a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size.
+   b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified `max_tokens` in the call. For requests that include at least 1024 cached tokens, the cached tokens are subtracted from the prompt token value. A customer can receive up to a 100% discount on their prompt tokens depending on the size of their cached tokens. If the `max_tokens` parameter is not specified, the service estimates a value. This estimation can lead to lower concurrency than expected when the number of actual generated tokens is small. For highest concurrency, ensure that the `max_tokens` value is as close as possible to the true generation size.

-1. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic:
+1. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic:

-   a. If the actual > estimated, then the difference is added to the deployment's utilization.
+   a. If the actual > estimated, then the difference is added to the deployment's utilization.

-   b. If the actual < estimated, then the difference is subtracted.
+   b. If the actual < estimated, then the difference is subtracted.

-1. The overall utilization is decremented down at a continuous rate based on the number of PTUs deployed.
+1. The overall utilization is decremented down at a continuous rate based on the number of PTUs deployed.

 > [!NOTE]
 > Calls are accepted until utilization reaches 100%. Bursts just over 100% may be permitted in short periods, but over time, your traffic is capped at 100% utilization.
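The hunk above describes the utilization accounting loop only in prose. Below is a minimal sketch of that loop, assuming a single deployment, capacity expressed as tokens per minute, and a drain rate of one full capacity per minute; the class name, the default `max_tokens` estimate, and the unit choices are illustrative assumptions, not the service's actual implementation.

```python
import time


class UtilizationTracker:
    """Illustrative model of the utilization accounting described above.

    Capacity is expressed as tokens per minute and utilization as a fraction of
    that capacity. The real service's units and decay rate are not public, so
    this is a sketch of the described behavior, not the actual implementation.
    """

    def __init__(self, capacity_tokens_per_minute: float):
        self.capacity = capacity_tokens_per_minute  # assumed to scale with PTUs deployed
        self.utilization = 0.0                      # fraction of capacity currently committed
        self.last_decay = time.monotonic()

    def _decay(self) -> None:
        # Step 4: utilization is decremented at a continuous rate; here we assume
        # one full capacity's worth of utilization drains per minute.
        now = time.monotonic()
        elapsed_minutes = (now - self.last_decay) / 60.0
        self.utilization = max(0.0, self.utilization - elapsed_minutes)
        self.last_decay = now

    def try_admit(self, prompt_tokens: int, max_tokens: int | None, cached_tokens: int = 0) -> dict:
        self._decay()
        # Step 2a: above 100% utilization -> 429 with retry-after-ms set to the
        # time until utilization falls back below 100% (under the assumed drain rate).
        if self.utilization > 1.0:
            retry_after_ms = int((self.utilization - 1.0) * 60_000)
            return {"status": 429, "retry-after-ms": retry_after_ms}

        # Step 2b: estimate incremental utilization from prompt tokens plus
        # max_tokens; cached tokens (>= 1024) are subtracted from the prompt count.
        billable_prompt = max(prompt_tokens - cached_tokens, 0) if cached_tokens >= 1024 else prompt_tokens
        if max_tokens is None:
            max_tokens = 1000  # the service estimates a value; this constant is illustrative
        estimated = (billable_prompt + max_tokens) / self.capacity
        self.utilization += estimated
        return {"status": 200, "estimated": estimated}

    def settle(self, estimated: float, actual_total_tokens: int) -> None:
        # Step 3: once the actual compute cost is known, correct the estimate.
        actual = actual_total_tokens / self.capacity
        self.utilization += actual - estimated  # 3a adds the difference, 3b subtracts it
        self.utilization = max(0.0, self.utilization)
```

A caller of this sketch would invoke `try_admit` before dispatching a request and `settle` with the response's actual token count once it completes; the production service performs equivalent bookkeeping per deployment across all concurrent traffic.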
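On the client side, step 2a implies that a 429 from a provisioned deployment should be retried after the interval in the `retry-after-ms` header rather than immediately. A minimal sketch using the `requests` library follows; the endpoint URL, payload shape, fallback wait, and attempt limit are assumptions for illustration, not values prescribed by the documentation.

```python
import time

import requests


def call_with_retry(url: str, api_key: str, payload: dict, max_attempts: int = 5) -> dict:
    """Send a request to a provisioned deployment, honoring retry-after-ms on 429s."""
    for _ in range(max_attempts):
        resp = requests.post(url, headers={"api-key": api_key}, json=payload)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # The service sets retry-after-ms to the time until utilization drops below 100%.
        wait_ms = int(resp.headers.get("retry-after-ms", "1000"))
        time.sleep(wait_ms / 1000.0)
    raise RuntimeError("Deployment stayed above 100% utilization after retries")
```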
