# Commit c309ea7

Merge branch 'openai:main' into main

2 parents: 4d56548 + e3b4856

111 files changed: +3430 −917 lines
### README.md

Lines changed: 5 additions & 0 deletions

````diff
@@ -157,6 +157,10 @@ The Agents SDK is designed to be highly flexible, allowing you to model a wide r
 The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents), [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk), [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk), [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration), and [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent). For more details about how to customize or disable tracing, see [Tracing](http://openai.github.io/openai-agents-python/tracing), which also includes a larger list of [external tracing processors](http://openai.github.io/openai-agents-python/tracing/#external-tracing-processors-list).
 
+## Long running agents & human-in-the-loop
+
+You can use the Agents SDK [Temporal](https://temporal.io/) integration to run durable, long-running workflows, including human-in-the-loop tasks. View a demo of Temporal and the Agents SDK working in action to complete long-running tasks [in this video](https://www.youtube.com/watch?v=fFBZqzT4DD8), and [view docs here](https://github.com/temporalio/sdk-python/tree/main/temporalio/contrib/openai_agents).
+
 ## Sessions
 
 The Agents SDK provides built-in session memory to automatically maintain conversation history across multiple agent runs, eliminating the need to manually handle `.to_input_list()` between turns.
````

````diff
@@ -299,6 +303,7 @@ make format-check # run style checker
 We'd like to acknowledge the excellent work of the open-source community, especially:
 
 - [Pydantic](https://docs.pydantic.dev/latest/) (data validation) and [PydanticAI](https://ai.pydantic.dev/) (advanced agent framework)
+- [LiteLLM](https://github.com/BerriAI/litellm) (unified interface for 100+ LLMs)
 - [MkDocs](https://github.com/squidfunk/mkdocs-material)
 - [Griffe](https://github.com/mkdocstrings/griffe)
 - [uv](https://github.com/astral-sh/uv) and [ruff](https://github.com/astral-sh/ruff)
````
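The session memory described in the README hunk above can be pictured with a short, SDK-free sketch. `MemorySession` and `run_turn` here are hypothetical stand-ins, not the SDK's actual classes; the point is only that history is stored per session and replayed on each run, so callers never stitch `.to_input_list()` output together by hand:

```python
# Conceptual sketch of session memory; `MemorySession` and `run_turn`
# are illustrative stand-ins, not the Agents SDK's real API.

class MemorySession:
    def __init__(self, session_id: str):
        self.session_id = session_id
        self.items: list[dict] = []  # full conversation history

    def to_input_list(self) -> list[dict]:
        return list(self.items)  # copy, so callers cannot mutate history

    def add_items(self, items: list[dict]) -> None:
        self.items.extend(items)

def run_turn(user_input: str, session: MemorySession) -> str:
    # History is replayed automatically: the model input is all prior
    # turns plus the new user message.
    model_input = session.to_input_list() + [{"role": "user", "content": user_input}]
    reply = f"echo[{len(model_input)}]: {user_input}"  # stand-in for an LLM call
    session.add_items([
        {"role": "user", "content": user_input},
        {"role": "assistant", "content": reply},
    ])
    return reply

session = MemorySession("conversation_123")
run_turn("What city is the Golden Gate Bridge in?", session)
run_turn("What state is it in?", session)  # second turn sees the first
```

The same shape applies with the real SDK: a session object is handed to the runner, which prepends stored history before each model call.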

### docs/agents.md

Lines changed: 98 additions & 1 deletion

````diff
@@ -16,6 +16,7 @@ from agents import Agent, ModelSettings, function_tool
 
 @function_tool
 def get_weather(city: str) -> str:
+    """returns weather info for the specified city."""
     return f"The weather in {city} is sunny"
 
 agent = Agent(
````

````diff
@@ -33,6 +34,7 @@ Agents are generic on their `context` type. Context is a dependency-injection to
 @dataclass
 class UserContext:
+    name: str
     uid: str
     is_pro_user: bool
````

````diff
@@ -141,8 +143,103 @@ Supplying a list of tools doesn't always mean the LLM will use a tool. You can f
 3. `none`, which requires the LLM to _not_ use a tool.
 4. Setting a specific string e.g. `my_tool`, which requires the LLM to use that specific tool.
 
+```python
+from agents import Agent, Runner, function_tool, ModelSettings
+
+@function_tool
+def get_weather(city: str) -> str:
+    """Returns weather info for the specified city."""
+    return f"The weather in {city} is sunny"
+
+agent = Agent(
+    name="Weather Agent",
+    instructions="Retrieve weather details.",
+    tools=[get_weather],
+    model_settings=ModelSettings(tool_choice="get_weather")
+)
+```
+
+## Tool Use Behavior
+
+The `tool_use_behavior` parameter in the `Agent` configuration controls how tool outputs are handled:
+- `"run_llm_again"`: The default. Tools are run, and the LLM processes the results to produce a final response.
+- `"stop_on_first_tool"`: The output of the first tool call is used as the final response, without further LLM processing.
+
+```python
+from agents import Agent, Runner, function_tool, ModelSettings
+
+@function_tool
+def get_weather(city: str) -> str:
+    """Returns weather info for the specified city."""
+    return f"The weather in {city} is sunny"
+
+agent = Agent(
+    name="Weather Agent",
+    instructions="Retrieve weather details.",
+    tools=[get_weather],
+    tool_use_behavior="stop_on_first_tool"
+)
+```
+
+- `StopAtTools(stop_at_tool_names=[...])`: Stops if any specified tool is called, using its output as the final response.
+```python
+from agents import Agent, Runner, function_tool
+from agents.agent import StopAtTools
+
+@function_tool
+def get_weather(city: str) -> str:
+    """Returns weather info for the specified city."""
+    return f"The weather in {city} is sunny"
+
+@function_tool
+def sum_numbers(a: int, b: int) -> int:
+    """Adds two numbers."""
+    return a + b
+
+agent = Agent(
+    name="Stop At Stock Agent",
+    instructions="Get weather or sum numbers.",
+    tools=[get_weather, sum_numbers],
+    tool_use_behavior=StopAtTools(stop_at_tool_names=["get_weather"])
+)
+```
+- `ToolsToFinalOutputFunction`: A custom function that processes tool results and decides whether to stop or continue with the LLM.
+
+```python
+from agents import Agent, Runner, function_tool, FunctionToolResult, RunContextWrapper
+from agents.agent import ToolsToFinalOutputResult
+from typing import List, Any
+
+@function_tool
+def get_weather(city: str) -> str:
+    """Returns weather info for the specified city."""
+    return f"The weather in {city} is sunny"
+
+def custom_tool_handler(
+    context: RunContextWrapper[Any],
+    tool_results: List[FunctionToolResult]
+) -> ToolsToFinalOutputResult:
+    """Processes tool results to decide final output."""
+    for result in tool_results:
+        if result.output and "sunny" in result.output:
+            return ToolsToFinalOutputResult(
+                is_final_output=True,
+                final_output=f"Final weather: {result.output}"
+            )
+    return ToolsToFinalOutputResult(
+        is_final_output=False,
+        final_output=None
+    )
+
+agent = Agent(
+    name="Weather Agent",
+    instructions="Retrieve weather details.",
+    tools=[get_weather],
+    tool_use_behavior=custom_tool_handler
+)
+```
+
 !!! note
 
     To prevent infinite loops, the framework automatically resets `tool_choice` to "auto" after a tool call. This behavior is configurable via [`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice]. The infinite loop is because tool results are sent to the LLM, which then generates another tool call because of `tool_choice`, ad infinitum.
 
-If you want the Agent to completely stop after a tool call (rather than continuing with auto mode), you can set [`Agent.tool_use_behavior="stop_on_first_tool"`] which will directly use the tool output as the final response without further LLM processing.
````
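The loop-prevention behavior in the note above can be sketched without the SDK. `fake_model` and `run_loop` are illustrative stand-ins, not SDK internals; they only show why a forced `tool_choice` must be reset to `"auto"` once a tool has run:

```python
# Simplified illustration of why tool_choice is reset after a tool call.
# With a forced tool_choice, every model turn emits another tool call,
# so without a reset the loop below can never reach a final answer.

def fake_model(messages: list[str], tool_choice: str) -> str:
    # Stand-in for an LLM: a forced tool_choice always yields a tool call.
    if tool_choice != "auto":
        return "tool_call:get_weather"
    return "final answer"

def run_loop(tool_choice: str, reset_tool_choice: bool, max_turns: int = 5) -> str:
    messages: list[str] = ["user: weather in Tokyo?"]
    for _ in range(max_turns):
        output = fake_model(messages, tool_choice)
        if not output.startswith("tool_call:"):
            return output
        messages.append("tool_result: sunny")
        if reset_tool_choice:
            tool_choice = "auto"  # mirrors the framework's automatic reset
    return "max_turns exceeded"

print(run_loop(tool_choice="get_weather", reset_tool_choice=True))   # final answer
print(run_loop(tool_choice="get_weather", reset_tool_choice=False))  # max_turns exceeded
```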

### docs/index.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -4,7 +4,7 @@ The [OpenAI Agents SDK](https://github.com/openai/openai-agents-python) enables
 - **Agents**, which are LLMs equipped with instructions and tools
 - **Handoffs**, which allow agents to delegate to other agents for specific tasks
-- **Guardrails**, which enable the inputs to agents to be validated
+- **Guardrails**, which enable validation of agent inputs and outputs
 - **Sessions**, which automatically maintains conversation history across agent runs
 
 In combination with Python, these primitives are powerful enough to express complex relationships between tools and agents, and allow you to build real-world applications without a steep learning curve. In addition, the SDK comes with built-in **tracing** that lets you visualize and debug your agentic flows, as well as evaluate them and even fine-tune models for your application.
````
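The reworded guardrails bullet now covers both directions, which a plain-Python sketch makes concrete. `input_guardrail`, `output_guardrail`, and `guarded_run` are hypothetical names for this illustration, not the SDK's guardrail API:

```python
# Schematic of an input/output guardrail pair: each returns whether its
# tripwire fired, and the run aborts when one does. All names here are
# illustrative, not the Agents SDK's actual guardrail interface.

def input_guardrail(user_input: str) -> bool:
    # Trip when the request asks for something off-limits.
    return "homework" in user_input.lower()

def output_guardrail(agent_output: str) -> bool:
    # Trip when the response would leak a marker string.
    return "SECRET" in agent_output

def guarded_run(user_input: str) -> str:
    if input_guardrail(user_input):
        return "blocked: input tripwire"
    agent_output = f"answer to: {user_input}"  # stand-in for the agent run
    if output_guardrail(agent_output):
        return "blocked: output tripwire"
    return agent_output
```

Input guardrails reject a request before the agent spends any tokens on it; output guardrails inspect what the agent produced before it reaches the caller.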
