Commit a4f033b

Merge pull request #281896 from aahill/sm-changes
updating system message table
2 parents 395fde6 + c0e4b8f commit a4f033b

File tree: 1 file changed (+12 −8 lines)

articles/ai-services/openai/concepts/use-your-data.md (12 additions, 8 deletions)
Original file line numberDiff line numberDiff line change
@@ -579,14 +579,18 @@ These estimates will vary based on the values set for the above parameters. For
 
 The estimates also depend on the nature of the documents and questions being asked. For example, if the questions are open-ended, the responses are likely to be longer. Similarly, a longer system message would contribute to a longer prompt that consumes more tokens, and if the conversation history is long, the prompt will be longer.
 
-| Model | Max tokens for system message | Max tokens for model response |
-|--|--|--|
-| GPT-35-0301 | 400 | 1500 |
-| GPT-35-0613-16K | 1000 | 3200 |
-| GPT-4-0613-8K | 400 | 1500 |
-| GPT-4-0613-32K | 2000 | 6400 |
-
-The table above shows the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens:
+| Model | Max tokens for system message |
+|--|--|
+| GPT-35-0301 | 400 |
+| GPT-35-0613-16K | 1000 |
+| GPT-4-0613-8K | 400 |
+| GPT-4-0613-32K | 2000 |
+| GPT-35-turbo-0125 | 2000 |
+| GPT-4-turbo-0409 | 4000 |
+| GPT-4o | 4000 |
+| GPT-4o-mini | 4000 |
+
+The table above shows the maximum number of tokens that can be used for the [system message](#system-message). To see the maximum tokens for the model response, see the [models article](./models.md#gpt-4-and-gpt-4-turbo-models). Additionally, the following also consume tokens:
 
 
 