
Commit d4f198a

Merge pull request #267335 from aahill/token-update
token usage estimation
2 parents 76da583 + 9ade4d7

File tree

1 file changed: +48 −2 lines changed

articles/ai-services/openai/concepts/use-your-data.md

Lines changed: 48 additions & 2 deletions
@@ -420,7 +420,50 @@ When you chat with a model, providing a history of the chat will help the model
## Token usage estimation for Azure OpenAI On Your Data

Azure OpenAI On Your Data is a Retrieval Augmented Generation (RAG) service that leverages both a search service (such as Azure AI Search) and generation (Azure OpenAI models) to let users get answers to their questions based on provided data.
As part of this RAG pipeline, there are three steps at a high level:

1. Reformulate the user query into a list of search intents. This is done by making a call to the model with a prompt that includes instructions, the user question, and conversation history. Let's call this the *intent prompt*.
1. For each intent, multiple document chunks are retrieved from the search service. After filtering out irrelevant chunks based on the user-specified strictness threshold, and reranking and aggregating the chunks based on internal logic, the user-specified number of document chunks is chosen.

1. These document chunks, along with the user question, conversation history, role information, and instructions, are sent to the model to generate the final model response. Let's call this the *generation prompt*.
In total, there are two calls made to the model:
* For processing the intent: The token estimate for the *intent prompt* includes those for the user question, conversation history, and the instructions sent to the model for intent generation.
* For generating the response: The token estimate for the *generation prompt* includes those for the user question, conversation history, the retrieved list of document chunks, role information, and the instructions sent to it for generation.
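The two calls above, and the retrieval step between them, can be sketched as follows. This is a structural sketch only: `call_model`, `search`, and `rerank_and_filter` are hypothetical stand-ins for the Azure OpenAI model, the search service, and the service's internal filtering logic, not real API calls.

```python
def call_model(prompt, task):
    # Hypothetical stand-in for a chat-completions call to an Azure OpenAI model.
    return ["example intent"] if task == "intents" else "example answer"

def search(intent):
    # Hypothetical stand-in for a query against the search service
    # (for example, Azure AI Search).
    return [{"content": f"chunk for {intent}", "score": 0.9}]

def rerank_and_filter(chunks, strictness, top_n):
    # Placeholder for the internal reranking and strictness-based filtering.
    return sorted(chunks, key=lambda c: c["score"], reverse=True)[:top_n]

def answer_with_your_data(question, history, instructions):
    # Call 1: the intent prompt (instructions + history + question -> intents).
    intents = call_model([instructions, *history, question], task="intents")

    # Retrieval: gather chunks per intent, then filter and rerank.
    chunks = []
    for intent in intents:
        chunks.extend(search(intent))
    chunks = rerank_and_filter(chunks, strictness=3, top_n=5)

    # Call 2: the generation prompt (everything above + chunks -> final answer).
    return call_model([instructions, *history, question, *chunks], task="answer")
```

Both calls consume input and output tokens, which is why the estimates below track four separate counts.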
The model-generated output tokens (both intents and response) need to be taken into account for total token estimation. Summing up all four columns below gives the average total tokens used for generating a response.
| Model | Generation prompt token count | Intent prompt token count | Response token count | Intent token count |
|--|--|--|--|--|
| gpt-35-turbo-16k | 4297 | 1366 | 111 | 25 |
| gpt-4-0613 | 3997 | 1385 | 118 | 18 |
| gpt-4-1106-preview | 4538 | 811 | 119 | 27 |
| gpt-35-turbo-1106 | 4854 | 1372 | 110 | 26 |
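Summing the four columns can be done directly; a small sketch using the table values:

```python
# Per-response average token counts from the table above:
# (generation prompt, intent prompt, response, intent).
counts = {
    "gpt-35-turbo-16k":   (4297, 1366, 111, 25),
    "gpt-4-0613":         (3997, 1385, 118, 18),
    "gpt-4-1106-preview": (4538, 811, 119, 27),
    "gpt-35-turbo-1106":  (4854, 1372, 110, 26),
}

# Average total tokens used per generated response.
avg_total_tokens = {model: sum(columns) for model, columns in counts.items()}
# For example, gpt-35-turbo-16k: 4297 + 1366 + 111 + 25 = 5799 tokens on average.
```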
The above numbers are based on testing on a data set with:
* 191 conversations
* 250 questions
* 10 average tokens per question
* 4 conversational turns per conversation on average
And the following [parameters](#runtime-parameters):
|Setting |Value |
|---------|---------|
|Number of retrieved documents | 5 |
|Strictness | 3 |
|Chunk size | 1024 |
|Limit responses to ingested data? | True |
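Collected as a configuration object, the test settings above might look like the following. The field names here are illustrative placeholders, not the exact API schema.

```python
# Illustrative only: the runtime settings used for the token estimates above.
# Field names are placeholders, not the On Your Data request schema.
run_settings = {
    "retrieved_documents": 5,       # number of document chunks kept per query
    "strictness": 3,                # relevance filtering threshold
    "chunk_size": 1024,             # tokens per chunk, fixed at ingestion time
    "limit_to_ingested_data": True, # restrict answers to the grounding data
}
```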
These estimates will vary based on the values set for the above parameters. For example, if the number of retrieved documents is set to 10 and strictness is set to 1, the token count will go up. If returned responses aren't limited to the ingested data, there are fewer instructions given to the model and the number of tokens will go down.
The estimates also depend on the nature of the documents and questions being asked. For example, if the questions are open-ended, the responses are likely to be longer. Similarly, a longer system message would contribute to a longer prompt that consumes more tokens, and if the conversation history is long, the prompt will be longer.

| Model | Max tokens for system message | Max tokens for model response |
|--|--|--|
@@ -429,16 +472,18 @@ When you chat with a model, providing a history of the chat will help the model
| GPT-4-0613-8K | 400 | 1500 |
| GPT-4-0613-32K | 2000 | 6400 |

The table above shows the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens:
* The meta prompt: If you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is higher. Otherwise (for example, if `inScope=False`), the maximum is lower. This number is variable depending on the token length of the user question and conversation history. This estimate includes the base prompt and the query rewriting prompts for retrieval.
* User question and history: Variable, but capped at 2,000 tokens.
* Retrieved documents (chunks): The number of tokens used by the retrieved document chunks depends on multiple factors. The upper bound is the number of retrieved document chunks multiplied by the chunk size. It will, however, be truncated based on the tokens available for the specific model being used after counting the rest of the fields.
20% of the available tokens are reserved for the model response. The remaining 80% of available tokens covers the meta prompt, the user question and conversation history, and the system message; whatever is left of that budget is used by the retrieved document chunks.
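A minimal sketch of this budget split. The 16K context window in the usage example and the function names are assumptions for illustration; the 20% reservation and the chunk upper bound come from the text above.

```python
def split_token_budget(context_window, response_share=0.20):
    # 20% of the available tokens are reserved for the model response;
    # the remaining 80% covers the meta prompt, user question and history,
    # system message, and the retrieved document chunks.
    response_tokens = int(context_window * response_share)
    return context_window - response_tokens, response_tokens

def chunk_token_upper_bound(num_chunks, chunk_size):
    # Upper bound on chunk tokens: retrieved chunks multiplied by chunk size.
    return num_chunks * chunk_size

# Example: a 16K (16,384-token) context window with the default settings above.
prompt_budget, response_budget = split_token_budget(16384)
chunk_cap = chunk_token_upper_bound(num_chunks=5, chunk_size=1024)
```

In practice the chunks are truncated further if the other prompt fields leave less room than this upper bound.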
To compute the number of tokens consumed by your input (such as your question, or the system message/role information), use the following code sample.
```python
import tiktoken

class TokenEstimator(object):
    # Encoding choice is illustrative; use the encoding that matches your
    # model (cl100k_base corresponds to the GPT-3.5 and GPT-4 models above).
    TOKENIZER = tiktoken.get_encoding("cl100k_base")

    @classmethod
    def estimate_tokens(cls, text: str) -> int:
        return len(cls.TOKENIZER.encode(text))

input_text = "Your system message and question here."  # replace with your input
token_output = TokenEstimator.estimate_tokens(input_text)
```
## Troubleshooting
### Failed ingestion jobs
