Commit c908a50

Re-add demo to show Count tokens (google#296)
* Add `GenerativeModel.count_tokens` demonstration
* Format notebook with nbfmt
* Update information about tokens
* Update python_quickstart.ipynb
* Update python_quickstart.ipynb
* nbfmt

Co-authored-by: Mark McDonald <[email protected]>
1 parent 3cf5a79 commit c908a50

File tree

1 file changed: +58 −0 lines changed


site/en/tutorials/python_quickstart.ipynb

Lines changed: 58 additions & 0 deletions
@@ -1034,6 +1034,64 @@
     "    display(to_markdown(f'**{message.role}**: {message.parts[0].text}'))"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "AEgVOYu0pAr4"
+   },
+   "source": [
+    "## Count tokens\n",
+    "\n",
+    "Large language models have a context window, and the context length is often measured in terms of the **number of tokens**. With the Gemini API, you can determine the number of tokens in any `glm.Content` object. In the simplest case, you can pass a query string to the `GenerativeModel.count_tokens` method as follows:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "eLjBmPCLpElk"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "total_tokens: 7"
+     ]
+    }
+   ],
+   "source": [
+    "model.count_tokens(\"What is the meaning of life?\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "oM2_U8pmpHQA"
+   },
+   "source": [
+    "Similarly, you can pass your `ChatSession` history to `count_tokens` to count the tokens in a conversation:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "i0MUU4BZpG4_"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "total_tokens: 501"
+     ]
+    }
+   ],
+   "source": [
+    "model.count_tokens(chat.history)"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {
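As a rough standalone sketch of the `count_tokens` calls this commit adds to the notebook: the helper below mirrors the two new code cells, but the `gemini-pro` model name, the `GOOGLE_API_KEY` environment variable, and the `count_tokens_demo` wrapper are all assumptions for illustration, not part of the commit.

```python
import os


def count_tokens_demo(api_key):
    """Hypothetical helper mirroring the cells added in this commit.

    Requires `pip install google-generativeai`. Returns None when no key is
    provided, so the sketch stays importable without credentials.
    """
    if not api_key:
        return None
    import google.generativeai as genai

    genai.configure(api_key=api_key)
    model = genai.GenerativeModel("gemini-pro")  # assumed model name

    # count_tokens accepts a plain query string...
    prompt_tokens = model.count_tokens("What is the meaning of life?").total_tokens

    # ...or a list of content, such as a ChatSession's history.
    chat = model.start_chat(history=[])
    chat.send_message("In one sentence, explain how a computer works to a young child.")
    history_tokens = model.count_tokens(chat.history).total_tokens

    return prompt_tokens, history_tokens


result = count_tokens_demo(os.environ.get("GOOGLE_API_KEY"))
```

The notebook's sample outputs (`total_tokens: 7` and `total_tokens: 501`) come from the `total_tokens` field of the response object; actual counts will vary with the prompt and chat history.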

0 commit comments
