Commit 4e54f2b

Merge pull request #259227 from mrbullwinkle/mrb_11_21_2023_latency_vnext
[Azure OpenAI] Latency
2 parents 7fb12bb + 0ba0d00 commit 4e54f2b

2 files changed: +71 −0 lines changed

articles/ai-services/openai/how-to/latency.md

Lines changed: 69 additions & 0 deletions
@@ -0,0 +1,69 @@
---
title: Azure OpenAI Service performance & latency
titleSuffix: Azure OpenAI
description: Learn about performance and latency with Azure OpenAI
manager: nitinme
ms.service: azure-ai-openai
ms.topic: how-to
ms.date: 11/21/2023
author: mrbullwinkle
ms.author: mbullwin
recommendations: false
ms.custom:
---
# Performance and latency

This article provides background on how latency works with Azure OpenAI and how to optimize your environment to improve performance.
## What is latency?

The high-level definition of latency in this context is the amount of time it takes to get a response back from the model. For completion and chat completion requests, latency depends largely on the model type as well as the number of tokens generated and returned. The number of tokens sent to the model as part of the input prompt has a much smaller overall impact on latency.
## Improve performance

### Model selection

Latency varies based on what model you're using. For an identical request, different models can be expected to have different latencies. If your use case requires the lowest latency models with the fastest response times, we recommend the latest models in the [GPT-3.5 Turbo model series](../concepts/models.md#gpt-35-models).
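
As a rough sketch of how you might compare latency across models, the example below times an identical request against two deployments using the `openai` Python package (v1.x). The endpoint, API key, API version, and deployment names are placeholders, not values from this article.

```python
import time
from openai import AzureOpenAI

# Placeholder endpoint, key, and API version -- substitute your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2023-05-15",
)

prompt = [{"role": "user", "content": "Summarize the benefits of unit testing in two sentences."}]

# Time the same request against two hypothetical deployment names.
for deployment in ["gpt-35-turbo", "gpt-4"]:
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=deployment,   # Azure OpenAI deployment name
        messages=prompt,
        max_tokens=100,
    )
    elapsed = time.perf_counter() - start
    print(f"{deployment}: {elapsed:.2f}s, "
          f"{response.usage.completion_tokens} completion tokens generated")
```

Printing `completion_tokens` alongside the elapsed time helps separate model speed from differences in how many tokens each model chose to generate.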
### Max tokens

When you send a completion request to the Azure OpenAI endpoint, your input text is converted to tokens, which are then sent to your deployed model. The model receives the input tokens and then begins generating a response. It's an iterative, sequential process, one token at a time. Another way to think of it is like a for loop with `n tokens = n iterations`.
Another important factor when evaluating latency is how many tokens are being generated. This is controlled largely via the `max_tokens` parameter. Reducing the number of tokens generated per request reduces the latency of each request.
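
As a minimal sketch (assuming the `openai` Python package v1.x, the `tiktoken` package, and a hypothetical deployment named `gpt-35-turbo`), the request below caps generation at 50 tokens via `max_tokens` and also shows how the input prompt maps to tokens before it's sent:

```python
import tiktoken
from openai import AzureOpenAI

# Placeholder endpoint, key, and deployment name -- substitute your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2023-05-15",
)

prompt = "Write a haiku about network latency."

# Count the tokens the prompt is converted into before it is sent.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
print(f"Input tokens: {len(encoding.encode(prompt))}")

# Cap the number of tokens the model is allowed to generate for this request.
response = client.chat.completions.create(
    model="gpt-35-turbo",   # Azure OpenAI deployment name
    messages=[{"role": "user", "content": prompt}],
    max_tokens=50,
)
print(f"Completion tokens generated: {response.usage.completion_tokens}")
print(response.choices[0].message.content)
```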
### Streaming

**Examples of when to use streaming**:

Chat bots and conversational interfaces.

Streaming impacts perceived latency. With streaming enabled, you'll receive tokens back in chunks as soon as they're available. From a user perspective, this often feels like the model is responding faster even though the overall time to complete the request remains the same.
**Examples of when streaming is less important**:

Sentiment analysis, language translation, content generation.

There are many use cases where you're performing a bulk task and only care about the finished result, not the real-time response. If streaming is disabled, you won't receive any tokens until the model has finished the entire response. A sketch of how streaming changes the consumption pattern follows below.
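
As a rough sketch (again assuming the `openai` Python package v1.x and placeholder endpoint, key, and deployment name), setting `stream=True` returns the response in chunks that can be displayed as they arrive:

```python
from openai import AzureOpenAI

# Placeholder endpoint, key, and deployment name -- substitute your own.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE-NAME.openai.azure.com",
    api_key="YOUR-API-KEY",
    api_version="2023-05-15",
)

# With stream=True, chunks are yielded as they're generated instead of
# waiting for the full completion to finish.
stream = client.chat.completions.create(
    model="gpt-35-turbo",   # Azure OpenAI deployment name
    messages=[{"role": "user", "content": "Explain streaming responses in one paragraph."}],
    stream=True,
)

for chunk in stream:
    # Some chunks may arrive without choices or content, so guard before printing.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```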
### Content filtering

Azure OpenAI includes a [content filtering system](./content-filters.md) that works alongside the core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content.

The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.

The addition of content filtering comes with an increase in safety, but also an increase in latency. There are many applications where this tradeoff in performance is necessary; however, there are certain lower-risk use cases where disabling the content filters to improve performance might be worth exploring.

Learn more about requesting modifications to the default [content filtering policies](./content-filters.md).
## Summary

* **Model latency**: If model latency is important to you, we recommend trying out our latest models in the [GPT-3.5 Turbo model series](../concepts/models.md).

* **Lower max tokens**: OpenAI has found that even in cases where the total number of tokens generated is similar, the request with the higher value set for the `max_tokens` parameter will have more latency.

* **Lower total tokens generated**: The fewer tokens generated, the faster the overall response will be. Remember, this is like having a for loop with `n tokens = n iterations`. Lower the number of tokens generated, and overall response time will improve accordingly.

* **Streaming**: Enabling streaming can be useful for managing user expectations in certain situations by allowing the user to see the model response as it's being generated rather than having to wait until the last token is ready.

* **Content filtering** improves safety, but it also impacts latency. Evaluate whether any of your workloads would benefit from [modified content filtering policies](./content-filters.md).

articles/ai-services/openai/toc.yml

Lines changed: 2 additions & 0 deletions
@@ -116,6 +116,8 @@ items:
   href: ./how-to/monitoring.md
 - name: Plan and manage costs
   href: ./how-to/manage-costs.md
+- name: Performance & latency
+  href: ./how-to/latency.md
 - name: Role-based access control (Azure RBAC)
   href: ./how-to/role-based-access-control.md
 - name: Business continuity & disaster recovery (BCDR)
