97 changes: 96 additions & 1 deletion docs/agents.md
@@ -143,8 +143,103 @@ Supplying a list of tools doesn't always mean the LLM will use a tool. You can f
3. `none`, which requires the LLM to _not_ use a tool.
4. Setting a specific string e.g. `my_tool`, which requires the LLM to use that specific tool.

> **Member:** can you remove this change?

```python
from agents import Agent, Runner, function_tool, ModelSettings

@function_tool
def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny"

agent = Agent(
name="Weather Agent",
instructions="Retrieve weather details.",
tools=[get_weather],
model_settings=ModelSettings(tool_choice="get_weather")
)
```

## Tool Use Behavior

The `tool_use_behavior` parameter in the `Agent` configuration controls how tool outputs are handled:
- `"run_llm_again"`: The default. Tools are run, and the LLM processes the results to produce a final response.
- `"stop_on_first_tool"`: The output of the first tool call is used as the final response, without further LLM processing.

```python
from agents import Agent, Runner, function_tool, ModelSettings

@function_tool
def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny"

agent = Agent(
name="Weather Agent",
instructions="Retrieve weather details.",
tools=[get_weather],
tool_use_behavior="stop_on_first_tool"
)
```
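
For comparison, here is a minimal sketch of the default `"run_llm_again"` behavior, equivalent to omitting the parameter entirely (it reuses the `get_weather` tool from above):

```python
from agents import Agent, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

# With "run_llm_again" (the default), the tool result is sent back to the
# LLM, which then composes the final natural-language response.
agent = Agent(
    name="Weather Agent",
    instructions="Retrieve weather details.",
    tools=[get_weather],
    tool_use_behavior="run_llm_again",
)
```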

- `StopAtTools(stop_at_tool_names=[...])`: Stops if any specified tool is called, using its output as the final response.
```python
from agents import Agent, Runner, function_tool
from agents.agent import StopAtTools

@function_tool
def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny"

@function_tool
def sum_numbers(a: int, b: int) -> int:
"""Adds two numbers."""
return a + b

agent = Agent(
name="Stop At Stock Agent",
instructions="Get weather or sum numbers.",
tools=[get_weather, sum_numbers],
tool_use_behavior=StopAtTools(stop_at_tool_names=["get_weather"])
)
```
- `ToolsToFinalOutputFunction`: A custom function that processes tool results and decides whether to stop or continue with the LLM.

```python
from agents import Agent, Runner, function_tool, FunctionToolResult, RunContextWrapper
from agents.agent import ToolsToFinalOutputResult
from typing import List, Any

@function_tool
def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny"

def custom_tool_handler(
context: RunContextWrapper[Any],
tool_results: List[FunctionToolResult]
) -> ToolsToFinalOutputResult:
"""Processes tool results to decide final output."""
for result in tool_results:
if result.output and "sunny" in result.output:
return ToolsToFinalOutputResult(
is_final_output=True,
final_output=f"Final weather: {result.output}"
)
return ToolsToFinalOutputResult(
is_final_output=False,
final_output=None
)

agent = Agent(
name="Weather Agent",
instructions="Retrieve weather details.",
tools=[get_weather],
tool_use_behavior=custom_tool_handler
)
```
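
A quick usage sketch for the handler above (assuming an OpenAI API key is configured; the query string here is illustrative):

```python
import asyncio

async def main():
    result = await Runner.run(agent, "What's the weather in Tokyo?")
    # If the tool output contains "sunny", custom_tool_handler stops the run,
    # so this would print something like:
    # "Final weather: The weather in Tokyo is sunny"
    print(result.final_output)

asyncio.run(main())
```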

!!! note

    To prevent infinite loops, the framework automatically resets `tool_choice` to "auto" after a tool call. This behavior is configurable via [`agent.reset_tool_choice`][agents.agent.Agent.reset_tool_choice]. Without the reset, tool results would be sent back to the LLM, which, because of `tool_choice`, would generate another tool call, ad infinitum.

    If you want the Agent to stop completely after a tool call (rather than continuing in auto mode), you can set `Agent.tool_use_behavior="stop_on_first_tool"`, which uses the tool output directly as the final response without further LLM processing.
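
As a sketch, disabling the automatic reset might look like this (it reuses the `get_weather` tool from above; doing so re-creates the loop risk described in the note, so pair it with `max_turns` or a stopping `tool_use_behavior`):

```python
from agents import Agent, ModelSettings, function_tool

@function_tool
def get_weather(city: str) -> str:
    """Returns weather info for the specified city."""
    return f"The weather in {city} is sunny"

agent = Agent(
    name="Weather Agent",
    instructions="Retrieve weather details.",
    tools=[get_weather],
    model_settings=ModelSettings(tool_choice="get_weather"),
    # Keep tool_choice pinned between turns instead of resetting to "auto".
    reset_tool_choice=False,
)
```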
53 changes: 53 additions & 0 deletions examples/exceptions/agents_exception.py
@@ -0,0 +1,53 @@
from __future__ import annotations

import asyncio

from agents import Agent, Runner, function_tool
from agents.exceptions import AgentsException

"""
This example demonstrates an OpenAI Agents SDK agent that uses tools and handles SDK errors.
The agent, 'Triage Agent', is configured to handle two tasks:
- Fetching weather information for a specified city using the `get_weather` tool.
- Adding two numbers using the `sum_numbers` tool.
The agent is instructed to use only one tool per execution cycle and can switch to another tool in subsequent cycles.
The example sets a `max_turns=1` limit to intentionally restrict the agent to a single turn, which may trigger a `MaxTurnsExceeded` error.
All exceptions are caught via `AgentsException`, the base class for SDK errors.
"""

# Define tools

@function_tool
async def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny."


@function_tool
async def sum_numbers(a: int, b: int) -> str:
"""Adds two numbers."""
result = a + b
return f"The sum of {a} and {b} is {result}."


agent = Agent(
name="Triage Agent",
instructions="Get weather or sum numbers. Use only one tool per turn.",
tools=[get_weather, sum_numbers],
)


async def main():
try:
user_input = input("Enter a message: ")
result = await Runner.run(agent, user_input, max_turns=1)
print("✅ Final Output:", result.final_output)
except AgentsException as e:
print(f"❌ Caught {e.__class__.__name__}: {e}")

> **Member:** This code snippet does not work as described here. Also, in the first place, I don't see the necessity of having this one.

if __name__ == "__main__":
asyncio.run(main())
64 changes: 64 additions & 0 deletions examples/exceptions/input_guardrail_tripwire_triggered.py
@@ -0,0 +1,64 @@
from __future__ import annotations

> **Member:** We already have mostly the same code in docs: https://openai.github.io/openai-agents-python/guardrails/

import asyncio
from typing import Any

from pydantic import BaseModel

from agents import (
Agent,
GuardrailFunctionOutput,
InputGuardrailTripwireTriggered,
Runner,
RunContextWrapper,
input_guardrail,
)

"""
This example demonstrates an OpenAI Agents SDK agent with an input guardrail to block math homework queries.

The 'CustomerSupportAgent' answers user queries provided as direct string input. An input guardrail, implemented via 'GuardrailAgent' and a Pydantic model (`MathHomeworkOutput`), checks whether the input is a math homework question. If it is, the guardrail raises `InputGuardrailTripwireTriggered` and a refusal message is printed; otherwise, the agent answers the query.
"""


class MathHomeworkOutput(BaseModel):
is_math_homework: bool


guardrail_agent = Agent(
name="GuardrailAgent",
instructions="Check if the input is a math homework question.",
output_type=MathHomeworkOutput,
)


@input_guardrail
async def math_guardrail(
    context: RunContextWrapper[Any],
    agent: Agent[Any],
    inputs: str | list[Any],
) -> GuardrailFunctionOutput:
    result = await Runner.run(guardrail_agent, inputs)
    output = result.final_output_as(MathHomeworkOutput)
    return GuardrailFunctionOutput(
        output_info=output,
        tripwire_triggered=output.is_math_homework,
    )


async def main():
agent = Agent(
name="CustomerSupportAgent",
instructions="Answer user queries.",
input_guardrails=[math_guardrail],
)

user_input = "What is 2 + 2"
try:
result = await Runner.run(agent, user_input)
print(result.final_output)
except InputGuardrailTripwireTriggered:
print("InputGuardrailTripwireTriggered, I can't help with math homework.")


if __name__ == "__main__":
asyncio.run(main())
43 changes: 43 additions & 0 deletions examples/exceptions/max_turns_exceeded.py
@@ -0,0 +1,43 @@
from __future__ import annotations

import asyncio

from agents import Agent, Runner, function_tool
from agents.exceptions import MaxTurnsExceeded

"""
This example demonstrates an OpenAI Agents SDK agent that triggers a MaxTurnsExceeded error.
The 'TriageAgent' handles user queries using tools for fetching weather (`get_weather`) and adding numbers (`sum_numbers`). The instructions direct the agent to process both tasks in a single turn, but `max_turns=1` prevents the run from completing, raising `MaxTurnsExceeded`. The example runs a single hardcoded query, catching and displaying the error.
"""

@function_tool
def get_weather(city: str) -> str:
"""Returns weather info for the specified city."""
return f"The weather in {city} is sunny"


@function_tool
def sum_numbers(a: int, b: int) -> int:
"""Adds two numbers."""
return a + b


async def main():
agent = Agent(
name="TriageAgent",
instructions="Process both get_weather and sum_numbers in a single turn when asked for both.",
tools=[sum_numbers, get_weather],
)

user_input = "What is US Weather and sum 2 + 2."
try:
result = await Runner.run(agent, user_input, max_turns=1)
print(result.final_output)
except MaxTurnsExceeded as e:
        print(f"Caught MaxTurnsExceeded: {e}")

> **Member:** This pattern is clearly mentioned at https://openai.github.io/openai-agents-python/running_agents/#the-agent-loop; so I don't think this code snippet is necessary.


if __name__ == "__main__":
asyncio.run(main())
37 changes: 37 additions & 0 deletions examples/exceptions/model_behavior_error.py
@@ -0,0 +1,37 @@
from __future__ import annotations

import asyncio
from typing import Literal

from pydantic import BaseModel

from agents import Agent, Runner
from agents.exceptions import ModelBehaviorError

"""
This example demonstrates an OpenAI Agents SDK agent that triggers a ModelBehaviorError due to invalid model output.
The 'MiniErrorBot' agent uses a Pydantic model (`Output`) whose `value` field must be the literal 'EXPECTED_VALUE'. The instructions tell the model to return 'Hello', so the output is expected to fail validation and raise a `ModelBehaviorError`. The example runs a single hardcoded query, catching and displaying the error.
"""

class Output(BaseModel):
value: Literal["EXPECTED_VALUE"]


async def main():
agent = Agent(
name="MiniErrorBot",
instructions="Just say: Hello",
output_type=Output,
)

user_input = "hello"
try:
result = await Runner.run(agent, user_input)
print(result.final_output)
except ModelBehaviorError as e:
print(f"ModelBehaviorError: {e}")

> **Member:** this is not raised here, plus to me this document is good enough: https://openai.github.io/openai-agents-python/running_agents/#exceptions

if __name__ == "__main__":
asyncio.run(main())
62 changes: 62 additions & 0 deletions examples/exceptions/output_guardrail_tripwire_triggered.py
@@ -0,0 +1,62 @@
from __future__ import annotations

import asyncio

from pydantic import BaseModel

from agents import (
Agent,
GuardrailFunctionOutput,
OutputGuardrailTripwireTriggered,
Runner,
output_guardrail,
)

"""
This example demonstrates an OpenAI Agents SDK agent with an output guardrail to block math homework responses.
The 'Assistant' agent answers user queries provided as direct string input. An output guardrail, using a Pydantic model (`MathHomeworkOutput`) and a guardrail agent, checks whether the response is a math homework answer. If it is, the guardrail raises `OutputGuardrailTripwireTriggered` and a refusal message is printed.
"""


class MathHomeworkOutput(BaseModel):
is_math_homework: bool


guardrail_agent = Agent(
name="GuardrailAgent",
instructions="Check if the output is a math homework answer.",
output_type=MathHomeworkOutput,
)


> **Member:** Math guardrail example does not make sense for output ones, plus we already have this example: https://github.com/openai/openai-agents-python/blob/main/examples/agent_patterns/output_guardrails.py

@output_guardrail
async def math_guardrail(context, agent: Agent, output: str) -> GuardrailFunctionOutput:
result = await Runner.run(guardrail_agent, output)
output_data = result.final_output_as(MathHomeworkOutput)
return GuardrailFunctionOutput(
output_info=output_data,
tripwire_triggered=output_data.is_math_homework,
)


async def main():
agent = Agent(
name="Assistant",
instructions="Answer user queries.",
output_guardrails=[math_guardrail],
)

user_input = "What is 2 + 2"

try:
result = await Runner.run(agent, user_input)
print(result.final_output)
except OutputGuardrailTripwireTriggered:
print(
"OutputGuardrailTripwireTriggered, I can't provide math homework answers."
)


if __name__ == "__main__":
asyncio.run(main())