
Commit ce1373b

Merge pull request #6141 from aahill/july-fixes
adding parameter note
2 parents 456d8de + f78f03c commit ce1373b

File tree

1 file changed: +5 -2 lines changed

articles/ai-foundry/openai/how-to/latency.md

Lines changed: 5 additions & 2 deletions
@@ -5,7 +5,7 @@ description: Learn about performance and latency with Azure OpenAI
manager: nitinme
ms.service: azure-ai-openai
ms.topic: how-to
-ms.date: 07/02/2025
+ms.date: 07/21/2025
author: mrbullwinkle
ms.author: mbullwin
recommendations: false
@@ -90,13 +90,16 @@ Latency varies based on what model you're using. For an identical request, expec

When you send a completion request to the Azure OpenAI endpoint, your input text is converted to tokens that are then sent to your deployed model. The model receives the input tokens and then begins generating a response. It's an iterative sequential process, one token at a time. Another way to think of it is like a for loop with `n tokens = n iterations`. For most models, generating the response is the slowest step in the process.

-At the time of the request, the requested generation size (max_tokens parameter) is used as an initial estimate of the generation size. The compute-time for generating the full size is reserved by the model as the request is processed. Once the generation is completed, the remaining quota is released. Ways to reduce the number of tokens:
+At the time of the request, the requested generation size (`max_tokens` parameter) is used as an initial estimate of the generation size. The compute-time for generating the full size is reserved by the model as the request is processed. Once the generation is completed, the remaining quota is released. Ways to reduce the number of tokens:
- Set the `max_tokens` parameter on each call as small as possible.
- Include stop sequences to prevent generating extra content.
- Generate fewer responses: The best_of & n parameters can greatly increase latency because they generate multiple outputs. For the fastest response, either don't specify these values or set them to 1.

In summary, reducing the number of tokens generated per request reduces the latency of each request.

+> [!NOTE]
+> `max_tokens` only changes the length of a response and in some cases might truncate it. The parameter doesn't change the quality of the response.
+
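A minimal sketch (illustrative only, not part of this commit's diff) of how the token-reducing parameters above might be set with the `openai` Python client against an Azure OpenAI deployment. The endpoint/key environment variables, API version, and deployment name are placeholder assumptions:

```python
import os
from openai import AzureOpenAI

# Placeholder endpoint/key environment variables and API version;
# substitute the values for your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Summarize the ticket below in two sentences:\n..."}],
    max_tokens=100,   # keep the requested generation size as small as practical
    stop=["\n\n\n"],  # stop sequence to prevent generating extra content
    n=1,              # a single completion keeps latency lowest
)

print(response.choices[0].message.content)
```

Omitting `n` and `best_of`, or setting them to 1, matches the guidance in the bullet list above.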
### Streaming
Setting `stream: true` in a request makes the service return tokens as soon as they're available, instead of waiting for the full sequence of tokens to be generated. It doesn't change the time to get all the tokens, but it reduces the time for first response. This approach provides a better user experience since end-users can read the response as it is generated.
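In the Python client this behavior maps to `stream=True`. A minimal sketch under the same placeholder assumptions (illustrative only, not part of the diff):

```python
import os
from openai import AzureOpenAI

# Placeholder endpoint/key environment variables, API version, and
# deployment name, as in the previous sketch.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

stream = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Write a short paragraph about latency."}],
    max_tokens=200,
    stream=True,  # return tokens as soon as they're available
)

# Each chunk carries a small delta of the response; printing it immediately
# lets the end user start reading before the full sequence is generated.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```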

0 commit comments