CHANGELOG.md: 5 additions & 0 deletions
@@ -2,6 +2,11 @@

 All notable changes to this project will be documented in this file.

+## [0.1.0] - May 2, 2024
+
+- Add `count_tokens_for_system_and_tools` to count tokens for the system message and tools. You should count the tokens for both together, since the token count for tools varies based on whether a system message is provided.
+- Updated `build_messages` to allow `tools` and `tool_choice` to be passed in.
+
 ## [0.0.6] - April 24, 2024

 - Add keyword argument `fallback_to_default` to the `build_messages` function to allow defaulting to the CL100k token encoder and minimum GPT token limit if the model is not found.
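As a rough sketch of how the new helper from the changelog above might be used: the exact signature of `count_tokens_for_system_and_tools` is not shown in this diff, so the keyword names below (`system_message`, `tools`, `tool_choice`) are assumptions for illustration, not a confirmed API.

```python
# Illustrative sketch only: the keyword names below are assumed from the
# changelog wording, not confirmed by this diff -- check the package docs
# for the actual signature.
from openai_messages_token_helper import count_tokens_for_system_and_tools

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Count the system message and tools together, since the tool token overhead
# differs depending on whether a system message is present.
token_count = count_tokens_for_system_and_tools(
    model="gpt-3.5-turbo",
    system_message={"role": "system", "content": "You are a helpful assistant."},
    tools=tools,
    tool_choice="auto",
)
print(token_count)
```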
README.md: 9 additions & 6 deletions
@@ -31,11 +31,14 @@ Arguments:

 * `model` (`str`): The model name to use for token calculation, like gpt-3.5-turbo.
 * `system_prompt` (`str`): The initial system prompt message.
-* `new_user_message` (`str | List[openai.types.chat.ChatCompletionContentPartParam]`): The new user message to append.
-* `past_messages` (`list[dict]`): The list of past messages in the conversation.
-* `few_shots` (`list[dict]`): A few-shot list of messages to insert after the system prompt.
-* `max_tokens` (`int`): The maximum number of tokens allowed for the conversation.
-* `fallback_to_default` (`bool`): Whether to fallback to default model/token limits if model is not found. Defaults to `False`.
+* `tools` (`List[openai.types.chat.ChatCompletionToolParam]`): (Optional) The tools that will be used in the conversation. They won't be part of the final returned messages, but they are used to calculate the token count.
+* `tool_choice` (`str | dict`): (Optional) The tool choice that will be used in the conversation. It won't be part of the final returned messages, but it is used to calculate the token count.
+* `new_user_content` (`str | List[openai.types.chat.ChatCompletionContentPartParam]`): (Optional) The content of the new user message to append.
+* `past_messages` (`list[dict]`): (Optional) The list of past messages in the conversation.
+* `few_shots` (`list[dict]`): (Optional) A few-shot list of messages to insert after the system prompt.
+* `max_tokens` (`int`): (Optional) The maximum number of tokens allowed for the conversation.
+* `fallback_to_default` (`bool`): (Optional) Whether to fall back to default model/token limits if the model is not found. Defaults to `False`.
+

 Returns:

@@ -49,7 +52,7 @@ from openai_messages_token_helper import build_messages
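Based on the argument list above, a call to `build_messages` with the new `tools`, `tool_choice`, and `new_user_content` parameters might look like the following sketch. The import is confirmed by the hunk header above; the keyword-argument style and the specific values are assumptions for illustration.

```python
from openai_messages_token_helper import build_messages

tools = [
    {
        "type": "function",
        "function": {
            "name": "search_docs",
            "description": "Search the documentation index.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
]

# Per the argument descriptions above, tools and tool_choice only affect the
# token count; they are not included in the returned message list.
messages = build_messages(
    model="gpt-3.5-turbo",
    system_prompt="You are a helpful assistant.",
    tools=tools,
    tool_choice="auto",
    new_user_content="How do I search the docs?",
    past_messages=[
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
    ],
    max_tokens=1024,
    fallback_to_default=True,
)
```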