@@ -376,7 +376,7 @@ If the model decides to call a tool it will define a `recipient` in the header o
 The model might also specify a `<|constrain|>` token to indicate the type of input for the tool call. In this case since it’s being passed in as JSON the `<|constrain|>` is set to `json`.
 
 ```
-<|channel|>analysis<|message|>Need to use function get_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|>
+<|channel|>analysis<|message|>Need to use function get_current_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|>
 ```
 
 #### Handling tool calls
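The token layout in the hunk above can be reproduced with a small formatter. This is a minimal sketch, assuming only the layout shown in the diff; `render_tool_call` is a hypothetical helper, not part of any library, and `json.dumps` inserts spaces after colons where the example above does not.

```python
import json

def render_tool_call(recipient: str, arguments: dict, cot: str) -> str:
    """Render a harmony-format tool call: the analysis channel carries the
    chain-of-thought, and the commentary channel carries the JSON arguments
    addressed to the tool via `to=`, with `<|constrain|>json` marking the
    content type of the call."""
    return (
        f"<|channel|>analysis<|message|>{cot}<|end|>"
        f"<|start|>assistant<|channel|>commentary to={recipient} "
        f"<|constrain|>json<|message|>{json.dumps(arguments)}<|call|>"
    )

msg = render_tool_call(
    "functions.get_current_weather",
    {"location": "San Francisco"},
    "Need to use function get_current_weather.",
)
```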
@@ -392,7 +392,7 @@ A tool message has the following format:
-} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant<|channel|>analysis<|message|>Need to use function get_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|><|start|>functions.get_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|><|start|>assistant
+} // namespace functions<|end|><|start|>user<|message|>What is the weather like in SF?<|end|><|start|>assistant<|channel|>analysis<|message|>Need to use function get_current_weather.<|end|><|start|>assistant<|channel|>commentary to=functions.get_current_weather <|constrain|>json<|message|>{"location":"San Francisco"}<|call|><|start|>functions.get_current_weather to=assistant<|channel|>commentary<|message|>{"sunny": true, "temperature": 20}<|end|><|start|>assistant
 ```
 
-As you can see above we are passing not just the function out back into the model for further sampling but also the previous chain-of-thought (“Need to use function get_weather.”) to provide the model with the necessary information to continue its chain-of-thought or provide the final answer.
+As you can see above we are passing not just the function output back into the model for further sampling but also the previous chain-of-thought (“Need to use function get_current_weather.”) to provide the model with the necessary information to continue its chain-of-thought or provide the final answer.
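The return leg of that round trip can be sketched the same way. This is a minimal sketch, assuming the token layout in the hunk above; `render_tool_response` is a hypothetical helper, not part of any library.

```python
import json

def render_tool_response(tool_name: str, output: dict) -> str:
    # The tool's output goes back on the commentary channel, addressed to the
    # assistant via `to=assistant`, and the string ends with <|start|>assistant
    # so sampling can continue; the earlier chain-of-thought stays in the
    # prompt unchanged.
    return (
        f"<|start|>{tool_name} to=assistant<|channel|>commentary"
        f"<|message|>{json.dumps(output)}<|end|><|start|>assistant"
    )

suffix = render_tool_response(
    "functions.get_current_weather",
    {"sunny": True, "temperature": 20},
)
```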
"Newer models such as gpt-4o or gpt-3.5-turbo can call multiple functions in one turn."
518
+
"Newer models such as gpt-5, gpt-4.1 or gpt-4o can call multiple functions in one turn."
519
519
]
520
520
},
521
521
{
@@ -758,7 +758,7 @@
 "source": [
 "##### Steps to invoke a function call using Chat Completions API: \n",
 "\n",
-"**Step 1**: Prompt the model with content that may result in model selecting a tool to use. The description of the tools such as a function names and signature is defined in the 'Tools' list and passed to the model in API call. If selected, the function name and parameters are included in the response.<br>\n",
+"**Step 1**: Prompt the model with content that may result in the model selecting a tool to use. The description of the tools, such as a function name and signature, is defined in the 'Tools' list and passed to the model in the API call. If selected, the function name and parameters are included in the response.<br>\n",
 "\n",
 "**Step 2**: Check programmatically if model wanted to call a function. If true, proceed to step 3. <br> \n",
 "**Step 3**: Extract the function name and parameters from response, call the function with parameters. Append the result to messages. <br> \n",
@@ -767,7 +767,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 19,
+"execution_count": null,
 "id": "e8b7cb9cdc7a7616",
 "metadata": {
 "ExecuteTime": {
@@ -792,9 +792,9 @@
 "}]\n",
 "\n",
 "response = client.chat.completions.create(\n",
-" model='gpt-4o', \n",
+" model=GPT_MODEL, \n",
 " messages=messages, \n",
-" tools=tools, \n",
+" tools=tools, \n",
 " tool_choice=\"auto\"\n",
 ")\n",
 "\n",
@@ -807,7 +807,7 @@
 },
 {
 "cell_type": "code",
-"execution_count": 20,
+"execution_count": null,
 "id": "351c39def3417776",
 "metadata": {
 "ExecuteTime": {
@@ -847,7 +847,7 @@
 " # Step 4: Invoke the chat completions API with the function response appended to the messages list\n",
 " # Note that messages with role 'tool' must be a response to a preceding message with 'tool_calls'\n",