**README.md** — 5 additions, 0 deletions
The Agents SDK automatically traces your agent runs, making it easy to track and debug the behavior of your agents. Tracing is extensible by design, supporting custom spans and a wide variety of external destinations, including [Logfire](https://logfire.pydantic.dev/docs/integrations/llms/openai/#openai-agents), [AgentOps](https://docs.agentops.ai/v1/integrations/agentssdk), [Braintrust](https://braintrust.dev/docs/guides/traces/integrations#openai-agents-sdk), [Scorecard](https://docs.scorecard.io/docs/documentation/features/tracing#openai-agents-sdk-integration), and [Keywords AI](https://docs.keywordsai.co/integration/development-frameworks/openai-agent). For more details about how to customize or disable tracing, see [Tracing](http://openai.github.io/openai-agents-python/tracing), which also includes a larger list of [external tracing processors](http://openai.github.io/openai-agents-python/tracing/#external-tracing-processors-list).
## Long running agents & human-in-the-loop
You can use the Agents SDK [Temporal](https://temporal.io/) integration to run durable, long-running workflows, including human-in-the-loop tasks. Watch a demo of Temporal and the Agents SDK completing long-running tasks [in this video](https://www.youtube.com/watch?v=fFBZqzT4DD8), and [view the docs here](https://github.com/temporalio/sdk-python/tree/main/temporalio/contrib/openai_agents).
## Sessions
The Agents SDK provides built-in session memory to automatically maintain conversation history across multiple agent runs, eliminating the need to manually handle `.to_input_list()` between turns.
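The manual bookkeeping that session memory eliminates can be sketched in plain Python (a hypothetical illustration of the pattern, not the SDK's implementation; `fake_model` and `SimpleSession` are invented names):

```python
# Hypothetical sketch of the history threading that session memory automates.
# `fake_model` stands in for an LLM call; it is not part of the Agents SDK.

def fake_model(messages):
    """Pretend model: reports how many messages of context it saw."""
    return {"role": "assistant", "content": f"saw {len(messages)} messages"}

class SimpleSession:
    """Minimal stand-in for session memory: stores the running transcript."""
    def __init__(self):
        self.items = []

    def run(self, user_text):
        # Each turn appends the user message, calls the model with the FULL
        # accumulated history, and appends the reply for the next turn.
        self.items.append({"role": "user", "content": user_text})
        reply = fake_model(self.items)
        self.items.append(reply)
        return reply["content"]

session = SimpleSession()
first = session.run("What city is the Golden Gate Bridge in?")
second = session.run("What state is it in?")  # second turn sees prior turns
```

With real session memory, the SDK performs this append-and-resend step for you, so consecutive `Runner` calls share context without any manual list handling.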
We'd like to acknowledge the excellent work of the open-source community, especially:

- [Pydantic](https://docs.pydantic.dev/latest/) (data validation) and [PydanticAI](https://ai.pydantic.dev/) (advanced agent framework)
- [LiteLLM](https://github.com/BerriAI/litellm) (unified interface for 100+ LLMs)
**docs/agents.md** — 99 additions, 2 deletions
```python
from agents import Agent, ModelSettings, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

agent = Agent(
    # The fields below complete the truncated snippet with an illustrative
    # configuration; only `tools=[get_weather]` is shown in the diff context.
    name="Weather agent",
    instructions="Answer weather questions using the tool.",
    tools=[get_weather],
)
```
Agents are generic on their `context` type. Context is a dependency-injection tool.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    name: str
    uid: str
    is_pro_user: bool
```
## Guardrails

Guardrails allow you to run checks/validations on user input in parallel to the agent running, and on the agent's output once it is produced. For example, you could screen the user's input and agent's output for relevance. Read more in the [guardrails](guardrails.md) documentation.
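The "checks in parallel" idea can be sketched with plain `asyncio` (a hypothetical illustration only, not the SDK's guardrail API; `relevance_guardrail`, `agent_work`, and `run_with_guardrail` are invented names):

```python
import asyncio

# Hypothetical sketch: run a cheap input check concurrently with the (slower)
# agent work, and cancel the work if the check trips. The real SDK wires this
# up for you via its guardrail decorators.

async def relevance_guardrail(user_input: str) -> bool:
    """Toy check: reject input that mentions a banned topic."""
    await asyncio.sleep(0)  # stand-in for a fast classifier call
    return "homework" not in user_input.lower()

async def agent_work(user_input: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for the slower agent run
    return f"Answer to: {user_input}"

async def run_with_guardrail(user_input: str) -> str:
    guard = asyncio.create_task(relevance_guardrail(user_input))
    work = asyncio.create_task(agent_work(user_input))
    if not await guard:  # guardrail finishes first and trips
        work.cancel()
        try:
            await work
        except asyncio.CancelledError:
            pass
        return "tripwire: input rejected"
    return await work
```

Because the check runs alongside the agent rather than before it, a passing input costs no extra latency, while a tripped guardrail stops the expensive work early.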
## Cloning/copying agents

Supplying a list of tools doesn't always mean the LLM will use a tool. You can force tool use by setting `ModelSettings.tool_choice`.

3. `none`, which requires the LLM to _not_ use a tool.
4. Setting a specific string e.g. `my_tool`, which requires the LLM to use that specific tool.
```python
from agents import Agent, Runner, function_tool, ModelSettings

@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

agent = Agent(
    name="Weather Agent",
    instructions="Retrieve weather details.",
    tools=[get_weather],
    # Force the model to call this specific tool (the lines below complete
    # the truncated snippet).
    model_settings=ModelSettings(tool_choice="get_weather"),
)
```
- `ToolsToFinalOutputFunction`: A custom function that processes tool results and decides whether to stop or continue with the LLM.
```python
from typing import Any, List

from agents import Agent, Runner, function_tool, FunctionToolResult, RunContextWrapper
from agents.agent import ToolsToFinalOutputResult

@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

def custom_tool_handler(
    context: RunContextWrapper[Any],
    tool_results: List[FunctionToolResult],
) -> ToolsToFinalOutputResult:
    """Processes tool results to decide final output."""
    for result in tool_results:
        if result.output and "sunny" in result.output:
            return ToolsToFinalOutputResult(
                is_final_output=True,
                final_output=f"Final weather: {result.output}",
            )
    return ToolsToFinalOutputResult(
        is_final_output=False,
        final_output=None,
    )

agent = Agent(
    name="Weather Agent",
    instructions="Retrieve weather details.",
    tools=[get_weather],
    tool_use_behavior=custom_tool_handler,
)
```
!!! note

    To prevent infinite loops, the framework automatically resets `tool_choice` to "auto" after a tool call. This behavior is configurable via [`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice]. The infinite loop is because tool results are sent to the LLM, which then generates another tool call because of `tool_choice`, ad infinitum.

If you want the Agent to completely stop after a tool call (rather than continuing with auto mode), you can set [`Agent.tool_use_behavior="stop_on_first_tool"`], which will directly use the tool output as the final response without further LLM processing.
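The loop described in the note can be simulated in a few lines of plain Python (a hypothetical illustration, not framework code; `fake_model_step` and `run_loop` are invented names):

```python
# Hypothetical simulation of why tool_choice is reset to "auto" after a tool
# call: a forced tool_choice makes the model emit a tool call on every step,
# so without the reset the loop never produces a final answer.

def fake_model_step(tool_choice):
    """Pretend model: a forced tool_choice always yields a tool call."""
    if tool_choice == "get_weather":
        return {"type": "tool_call", "tool": "get_weather"}
    return {"type": "final", "content": "It's sunny."}

def run_loop(reset_tool_choice, max_steps=5):
    tool_choice = "get_weather"  # caller forced a specific tool
    for step in range(1, max_steps + 1):
        out = fake_model_step(tool_choice)
        if out["type"] == "final":
            return f"finished in {step} steps"
        if reset_tool_choice:
            tool_choice = "auto"  # what the framework does by default
    return "hit max_steps (would loop forever)"
```

With the reset, the forced tool runs exactly once and the model is then free to answer; without it, every step repeats the same tool call until some step limit intervenes.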
**docs/handoffs.md** — 1 addition, 0 deletions
The [`handoff()`][agents.handoffs.handoff] function lets you customize things.

- `on_handoff`: A callback function executed when the handoff is invoked. This is useful for things like kicking off some data fetching as soon as you know a handoff is being invoked. This function receives the agent context, and can optionally also receive LLM generated input. The input data is controlled by the `input_type` param.
- `input_type`: The type of input expected by the handoff (optional).
- `input_filter`: This lets you filter the input received by the next agent. See below for more.
- `is_enabled`: Whether the handoff is enabled. This can be a boolean or a function that returns a boolean, allowing you to dynamically enable or disable the handoff at runtime.
```python
from agents import Agent, handoff, RunContextWrapper
```
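The boolean-or-callable behavior of `is_enabled` can be illustrated with a plain-Python predicate (a hypothetical sketch, not the `handoff()` API itself; `make_is_enabled` and the flag names are invented):

```python
# Hypothetical sketch of a dynamic is_enabled check: instead of a fixed
# boolean, a callable decides at runtime whether a handoff should be offered,
# e.g. based on a feature flag carried in the run context.

def make_is_enabled(flag_name):
    """Builds a predicate suitable for a dynamic enable/disable check."""
    def is_enabled(context):
        # `context` here is a plain dict standing in for the run context.
        return bool(context.get(flag_name, False))
    return is_enabled

refund_handoff_enabled = make_is_enabled("refunds_enabled")

# Filter the handoffs that are currently enabled for this run.
candidates = [("refund_agent", refund_handoff_enabled)]
available = [
    name for name, check in candidates
    if check({"refunds_enabled": True})
]
```

Passing a callable rather than `True`/`False` lets the same agent definition expose or hide a handoff per request, without rebuilding the agent.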
**docs/index.md** — 1 addition, 1 deletion
The [OpenAI Agents SDK](https://github.com/openai/openai-agents-python) enables you to build agentic AI apps with a small set of core primitives:

- **Agents**, which are LLMs equipped with instructions and tools
- **Handoffs**, which allow agents to delegate to other agents for specific tasks
- **Guardrails**, which enable validation of agent inputs and outputs
- **Sessions**, which automatically maintain conversation history across agent runs
In combination with Python, these primitives are powerful enough to express complex relationships between tools and agents, and allow you to build real-world applications without a steep learning curve. In addition, the SDK comes with built-in **tracing** that lets you visualize and debug your agentic flows, as well as evaluate them and even fine-tune models for your application.
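How these primitives compose can be sketched in plain Python (a hypothetical illustration only — real agents are LLM-backed and wired up via the SDK; every function name here is invented):

```python
# Hypothetical sketch of the primitives composing: a guardrail validates
# input, a triage "agent" hands off to a specialist, and the result flows
# back to the caller. Plain functions stand in for LLM-backed agents.

def guardrail(user_input):
    """Toy input guardrail: reject empty requests."""
    return len(user_input.strip()) > 0

def spanish_agent(user_input):
    return "hola"

def english_agent(user_input):
    return "hello"

def triage_agent(user_input):
    # Handoff: delegate to a specialist based on the request.
    if "spanish" in user_input.lower():
        return spanish_agent(user_input)
    return english_agent(user_input)

def run(user_input):
    if not guardrail(user_input):
        return "input rejected"
    return triage_agent(user_input)
```

The SDK's versions of these pieces are declarative (`Agent`, `handoff`, guardrail decorators, `Runner`), but the control flow they produce follows this shape.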