Commit 48aefee

Updated notebook based on PR feedback
1 parent 99a1f84 · commit 48aefee

File tree

1 file changed (+32 -17 lines changed)


examples/reasoning_function_calls.ipynb

Lines changed: 32 additions & 17 deletions
@@ -5,8 +5,8 @@
   "metadata": {},
   "source": [
    "# Managing Function Calls With Reasoning Models\n",
-   "OpenAI now offers [reasoning models](https://platform.openai.com/docs/guides/reasoning?api-mode=responses) which are trained to follow logical chains of thought, making them better suited for complex or multi-step tasks.\n",
-   "> \"_Reasoning models like o3 and o4-mini are LLMs trained with reinforcement learning to perform reasoning. Reasoning models think before they answer, producing a long internal chain of thought before responding to the user. Reasoning models excel in complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows. They're also the best models for Codex CLI, our lightweight coding agent._\"\n",
+   "OpenAI now offers function calling using [reasoning models](https://platform.openai.com/docs/guides/reasoning?api-mode=responses). Reasoning models are trained to follow logical chains of thought, making them better suited for complex or multi-step tasks.\n",
+   "> _Reasoning models like o3 and o4-mini are LLMs trained with reinforcement learning to perform reasoning. Reasoning models think before they answer, producing a long internal chain of thought before responding to the user. Reasoning models excel in complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows. They're also the best models for Codex CLI, our lightweight coding agent._\n",
    "\n",
    "For the most part, using these models via the API is very simple and comparable to using familiar classic 'chat' models. \n",
    "\n",
@@ -30,11 +30,12 @@
   "source": [
    "# pip install openai\n",
    "# Import libraries \n",
-   "import json, openai\n",
+   "import json\n",
+   "from openai import OpenAI\n",
    "from uuid import uuid4\n",
    "from typing import Callable\n",
    "\n",
-   "client = openai.OpenAI()\n",
+   "client = OpenAI()\n",
    "MODEL_DEFAULTS = {\n",
    "    \"model\": \"o4-mini\", # 200,000 token context window\n",
    "    \"reasoning\": {\"effort\": \"low\", \"summary\": \"auto\"}, # Automatically summarise the reasoning process. Can also choose \"detailed\" or \"none\"\n",
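The `MODEL_DEFAULTS` dict introduced in the hunk above is unpacked into every request with `**`. A minimal offline sketch of that pattern, where `fake_create` is a hypothetical stand-in for `client.responses.create` so no API key or network is needed:

```python
# Shared request settings live in one dict, mirroring the notebook's
# MODEL_DEFAULTS, and are unpacked into every call with **.
MODEL_DEFAULTS = {
    "model": "o4-mini",
    "reasoning": {"effort": "low", "summary": "auto"},
}

def fake_create(**kwargs):
    # Hypothetical stand-in for client.responses.create: it simply echoes
    # the merged keyword arguments the real API would receive.
    return kwargs

request = fake_create(input="Which host city is warmest?", **MODEL_DEFAULTS)
print(request["model"])  # the default model name flows through unchanged
```

Because Python merges the unpacked defaults with per-call arguments, each call stays short while the model and reasoning settings remain consistent across the notebook.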
@@ -65,10 +66,17 @@
   }
  ],
  "source": [
-   "# Let's keep track of the response ids in a naive way, in case we want to reverse the conversation and pick up from a previous point\n",
-   "response = client.responses.create(input=\"Which of the last four Olympic host cities has the highest average temperature?\", **MODEL_DEFAULTS)\n",
+   "response = client.responses.create(\n",
+   "    input=\"Which of the last four Olympic host cities has the highest average temperature?\",\n",
+   "    **MODEL_DEFAULTS\n",
+   ")\n",
    "print(response.output_text)\n",
-   "response = client.responses.create(input=\"what about the lowest?\", previous_response_id=response.id, **MODEL_DEFAULTS)\n",
+   "\n",
+   "response = client.responses.create(\n",
+   "    input=\"what about the lowest?\",\n",
+   "    previous_response_id=response.id,\n",
+   "    **MODEL_DEFAULTS\n",
+   ")\n",
    "print(response.output_text)"
  ]
 },
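The notebook's central nuance is that the model may emit several function calls in series, and we cannot know how many in advance, so responses must be processed in a loop. A hedged sketch of that loop, using a stub client and a hypothetical `get_avg_temp` tool (not the notebook's real code) so the control flow runs offline:

```python
import json

def run_tool(name, args):
    # Hypothetical tool registry; the notebook's actual tools differ.
    tools = {"get_avg_temp": lambda city: {"city": city, "avg_c": 15.5}}
    return tools[name](**args)

class StubResponse:
    def __init__(self, output, output_text=""):
        self.output = output          # list of output items (dicts here)
        self.output_text = output_text

class StubClient:
    """Stand-in for client.responses: one tool call, then a final answer."""
    def __init__(self):
        self.turn = 0

    def create(self, input, **kwargs):
        self.turn += 1
        if self.turn == 1:
            return StubResponse([{
                "type": "function_call",
                "name": "get_avg_temp",
                "arguments": json.dumps({"city": "Paris"}),
                "call_id": "call_1",
            }])
        return StubResponse([], output_text="Paris is the warmest on average.")

client = StubClient()
conversation = [{"role": "user", "content": "Which host city is warmest?"}]
response = client.create(input=conversation)

# We cannot know up front how many tool calls the model will make, so we
# loop until a response arrives with no outstanding function calls.
while any(item.get("type") == "function_call" for item in response.output):
    for item in response.output:
        if item.get("type") != "function_call":
            continue
        result = run_tool(item["name"], json.loads(item["arguments"]))
        conversation.append(item)  # keep the call itself in the history
        conversation.append({
            "type": "function_call_output",
            "call_id": item["call_id"],
            "output": json.dumps(result),
        })
    response = client.create(input=conversation)

print(response.output_text)
```

The key design point is that both the `function_call` item and its matching `function_call_output` are appended to the history before the next request, which is what keeps the model's chain of reasoning intact across turns.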
@@ -397,8 +405,8 @@
   "## Manual conversation orchestration\n",
   "So far so good! It's really cool to watch the model pause execution to run a function before continuing. \n",
   "In practice the example above is quite trivial, and production use cases may be much more complex:\n",
-   "* Our context window may grow too large and we may wish to prune older and less relevant messages\n",
-   "* We may not wish to proceed sequentially using the `previous_response_id` but allow users to navigate back and forth through the conversation and re-generate answers\n",
+   "* Our context window may grow too large and we may wish to prune older and less relevant messages, or summarize the conversation so far\n",
+   "* We may wish to allow users to navigate back and forth through the conversation and re-generate answers\n",
    "* We may wish to store messages in our own database for audit purposes rather than relying on OpenAI's storage and orchestration\n",
    "* etc.\n",
    "\n",
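When managing history manually, pruning older messages must not strip a `function_call` away from its `function_call_output`, or the chain of thought breaks. A hypothetical sketch of a pruning helper under that constraint (the item shapes loosely mirror Responses API items, but this is not code from the notebook):

```python
def prune_history(items, keep_last):
    """Keep the most recent `keep_last` items, then pull back in any earlier
    function_call whose output survived the cut, so that call/output pairs
    are never split when the history is trimmed."""
    kept = items[-keep_last:]
    needed = {i["call_id"] for i in kept if i.get("type") == "function_call_output"}
    present = {i["call_id"] for i in kept if i.get("type") == "function_call"}
    missing = needed - present
    # Rescue orphaned calls from the pruned prefix, preserving their order.
    rescued = [i for i in items[:-keep_last] if i.get("call_id") in missing]
    return rescued + kept

history = [
    {"role": "user", "content": "q1"},
    {"type": "function_call", "call_id": "c1", "name": "f", "arguments": "{}"},
    {"type": "function_call_output", "call_id": "c1", "output": "{}"},
    {"role": "assistant", "content": "a1"},
]
pruned = prune_history(history, keep_last=2)
# The function_call for c1 is rescued even though it fell outside the window.
```

A fuller implementation might also rescue reasoning items attached to a surviving call, or summarize the dropped prefix instead of discarding it, as the bullet list above suggests.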
@@ -526,15 +534,22 @@
   "metadata": {},
   "source": [
    "## Summary\n",
-   "* Reasoning models can invoke custom functions during their reasoning process, allowing for complex workflows that require external data or operations.\n",
-   "* These models may require multiple function calls in series, as some steps depend on the results of previous ones, necessitating a loop to handle ongoing reasoning.\n",
-   "* It's essential to preserve reasoning and function call responses in the conversation history to maintain the chain-of-thought and avoid errors in the reasoning process.\n"
+   "In this cookbook, we identified how to combine function calling with OpenAI's reasoning models to demonstrate multi-step tasks that are dependent on external data sources. \n",
+   "\n",
+   "Importantly, we covered reasoning-model specific nuances in the function calling process, specifically that:\n",
+   "* The model may choose to make multiple function calls or reasoning steps in series, and some steps may depend on the results of previous ones\n",
+   "* We cannot know how many of these steps there will be, so we must process responses with a loop\n",
+   "* The responses API makes orchestration easy using the `previous_response_id` parameter, but where manual control is needed, it's important to maintain the correct order of conversation items to preserve the 'chain-of-thought'\n",
+   "\n",
+   "---\n",
+   "\n",
+   "The examples used here are rather simple, but you can imagine how this technique could be extended to more real-world use cases, such as:\n",
+   "\n",
+   "* Looking up a customer's transaction history and recent correspondence to determine if they are eligible for a promotional offer\n",
+   "* Calling recent transaction logs, geolocation data, and device metadata to assess the likelihood of a transaction being fraudulent\n",
+   "* Reviewing internal HR databases to fetch an employee’s benefits usage, tenure, and recent policy changes to answer personalized HR questions\n",
+   "* Reading internal dashboards, competitor news feeds, and market analyses to compile a daily executive briefing tailored to their focus areas"
   ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {},
-  "source": []
  }
 ],
 "metadata": {
