Commit c0240e1

Copy OpenAI Agent samples from openai/openai-agents-python (#565)
1 parent 15ed958 commit c0240e1

109 files changed, +7715 -0 lines changed

Lines changed: 3 additions & 0 deletions
# Make the examples directory into a package to avoid top-level module name collisions.
# This is needed so that mypy treats files like examples/customer_service/main.py and
# examples/researcher_app/main.py as distinct modules rather than both named "main".
Lines changed: 54 additions & 0 deletions
# Common agentic patterns

This folder contains examples of different common patterns for agents.

## Deterministic flows

A common tactic is to break down a task into a series of smaller steps. Each step can be performed by an agent, and the output of one agent is used as input to the next. For example, if your task was to generate a story, you could break it down into the following steps:

1. Generate an outline
2. Generate the story
3. Generate the ending

Each of these steps can be performed by an agent, with the output of one agent used as the input to the next.

See the [`deterministic.py`](./deterministic.py) file for an example of this.

## Handoffs and routing

In many situations, you have specialized sub-agents that handle specific tasks. You can use handoffs to route the task to the right agent.

For example, you might have a frontline agent that receives a request and then hands off to a specialized agent based on the language of the request.

See the [`routing.py`](./routing.py) file for an example of this.
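For intuition, the routing step can be sketched without the SDK as a plain dispatch: a frontline classifier picks a specialist and hands the whole message over to it. `detect_language` and the specialist handlers below are hypothetical stand-ins for real agents, not SDK APIs.

```python
def detect_language(message: str) -> str:
    # Toy heuristic; a real frontline agent would classify with an LLM.
    lowered = message.lower()
    if any(word in lowered for word in ("hola", "gracias")):
        return "spanish"
    if any(word in lowered for word in ("bonjour", "merci")):
        return "french"
    return "english"

# Each specialist stands in for a language-specific agent.
SPECIALISTS = {
    "spanish": lambda msg: f"[spanish agent] {msg}",
    "french": lambda msg: f"[french agent] {msg}",
    "english": lambda msg: f"[english agent] {msg}",
}

def frontline(message: str) -> str:
    # Hand the whole conversation off to the matching specialist,
    # which owns it from this point onwards.
    return SPECIALISTS[detect_language(message)](message)
```

With the SDK, the dispatch table is replaced by the model choosing among the agents listed in `handoffs`.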
## Agents as tools

The mental model for handoffs is that the new agent "takes over": it sees the previous conversation history and owns the conversation from that point onwards. However, this is not the only way to use agents. You can also use an agent as a tool: the tool agent runs on its own and then returns the result to the original agent.

For example, you could model the translation task above as tool calls instead: rather than handing off to the language-specific agent, you call the agent as a tool and then use the result in the next step. This enables things like translating into multiple languages at once.

See the [`agents_as_tools.py`](./agents_as_tools.py) file for an example of this.

## LLM-as-a-judge

LLMs can often improve the quality of their output if given feedback. A common pattern is to generate a response using a model, and then use a second model to provide feedback. You can even use a small model for the initial generation and a larger model for the feedback, to optimize cost.

For example, you could use an LLM to generate an outline for a story, and then use a second LLM to evaluate the outline and provide feedback. You can then use the feedback to improve the outline, and repeat until the judge is satisfied with it.

See the [`llm_as_a_judge.py`](./llm_as_a_judge.py) file for an example of this.
## Parallelization

Running multiple agents in parallel is a common pattern. This can be useful for latency (e.g. when you have multiple steps that don't depend on each other) as well as for other reasons, e.g. generating multiple responses and picking the best one.

See the [`parallelization.py`](./parallelization.py) file for an example of this. It runs a translation agent multiple times in parallel and then picks the best translation.
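The fan-out/pick-best idea can be sketched with `asyncio.gather`; `translate_once` and `score` below are hypothetical stand-ins for agent runs and for the picking step.

```python
import asyncio

async def translate_once(text: str, variant: int) -> str:
    # A real implementation would await an agent/model call here;
    # gather() lets all of these run concurrently.
    await asyncio.sleep(0)
    return f"{text} [variant {variant}]"

def score(candidate: str) -> int:
    # A real picker might be another agent; here, longest wins.
    return len(candidate)

async def best_translation(text: str, n: int = 3) -> str:
    # Fan out n concurrent attempts, then pick the highest-scoring one.
    candidates = await asyncio.gather(
        *(translate_once(text, i) for i in range(n))
    )
    return max(candidates, key=score)
```

Because the attempts are independent, total latency is roughly that of the slowest single attempt rather than the sum of all of them.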
## Guardrails

Related to parallelization, you often want to run input guardrails to make sure the inputs to your agents are valid. For example, if you have a customer support agent, you might want to make sure that the user isn't trying to ask for help with a math problem.

You can do this without any special Agents SDK features by using parallelization, but the SDK also supports a dedicated guardrail primitive. Guardrails can have a "tripwire": if the tripwire is triggered, agent execution stops immediately and a `GuardrailTripwireTriggered` exception is raised.

This is really useful for latency: for example, you might have a very fast model that runs the guardrail and a slow model that runs the actual agent. You wouldn't want to wait for the slow model to finish, so guardrails let you quickly reject invalid inputs.

See the [`input_guardrails.py`](./input_guardrails.py) and [`output_guardrails.py`](./output_guardrails.py) files for examples.
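The tripwire behavior can be sketched in plain Python: a fast check runs before the slow agent and aborts by raising. The exception class and `math_homework_guardrail` below are local stand-ins for illustration, not the SDK's own definitions.

```python
class GuardrailTripwireTriggered(Exception):
    """Stand-in for the SDK exception raised when a tripwire fires."""

def math_homework_guardrail(user_input: str) -> None:
    # A real guardrail would use a fast model; this is a toy keyword check.
    if "solve" in user_input.lower() and "=" in user_input:
        raise GuardrailTripwireTriggered("math homework detected")

def run_support_agent(user_input: str) -> str:
    # The tripwire fires before the slow agent does any work,
    # so invalid inputs are rejected cheaply and quickly.
    math_homework_guardrail(user_input)
    return f"Support answer for: {user_input}"
```

The key property is that the guardrail short-circuits: when it trips, the expensive agent call never starts.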
Lines changed: 79 additions & 0 deletions
import asyncio

from agents import Agent, ItemHelpers, MessageOutputItem, Runner, trace

"""
This example shows the agents-as-tools pattern. The frontline agent receives a user message and
then picks which agents to call, as tools. In this case, it picks from a set of translation
agents.
"""

spanish_agent = Agent(
    name="spanish_agent",
    instructions="You translate the user's message to Spanish",
    handoff_description="An English to Spanish translator",
)

french_agent = Agent(
    name="french_agent",
    instructions="You translate the user's message to French",
    handoff_description="An English to French translator",
)

italian_agent = Agent(
    name="italian_agent",
    instructions="You translate the user's message to Italian",
    handoff_description="An English to Italian translator",
)

orchestrator_agent = Agent(
    name="orchestrator_agent",
    instructions=(
        "You are a translation agent. You use the tools given to you to translate. "
        "If asked for multiple translations, you call the relevant tools in order. "
        "You never translate on your own, you always use the provided tools."
    ),
    tools=[
        spanish_agent.as_tool(
            tool_name="translate_to_spanish",
            tool_description="Translate the user's message to Spanish",
        ),
        french_agent.as_tool(
            tool_name="translate_to_french",
            tool_description="Translate the user's message to French",
        ),
        italian_agent.as_tool(
            tool_name="translate_to_italian",
            tool_description="Translate the user's message to Italian",
        ),
    ],
)

synthesizer_agent = Agent(
    name="synthesizer_agent",
    instructions="You inspect translations, correct them if needed, and produce a final concatenated response.",
)


async def main():
    msg = input("Hi! What would you like translated, and to which languages? ")

    # Run the entire orchestration in a single trace
    with trace("Orchestrator evaluator"):
        orchestrator_result = await Runner.run(orchestrator_agent, msg)

        for item in orchestrator_result.new_items:
            if isinstance(item, MessageOutputItem):
                text = ItemHelpers.text_message_output(item)
                if text:
                    print(f"  - Translation step: {text}")

        synthesizer_result = await Runner.run(
            synthesizer_agent, orchestrator_result.to_input_list()
        )

    print(f"\n\nFinal response:\n{synthesizer_result.final_output}")


if __name__ == "__main__":
    asyncio.run(main())
Lines changed: 80 additions & 0 deletions
import asyncio

from pydantic import BaseModel

from agents import Agent, Runner, trace

"""
This example demonstrates a deterministic flow, where each step is performed by an agent.
1. The first agent generates a story outline
2. We feed the outline into the second agent
3. The second agent checks if the outline is good quality and if it is a scifi story
4. If the outline is not good quality or not a scifi story, we stop here
5. If the outline is good quality and a scifi story, we feed the outline into the third agent
6. The third agent writes the story
"""

story_outline_agent = Agent(
    name="story_outline_agent",
    instructions="Generate a very short story outline based on the user's input.",
)


class OutlineCheckerOutput(BaseModel):
    good_quality: bool
    is_scifi: bool


outline_checker_agent = Agent(
    name="outline_checker_agent",
    instructions="Read the given story outline, and judge the quality. Also, determine if it is a scifi story.",
    output_type=OutlineCheckerOutput,
)

story_agent = Agent(
    name="story_agent",
    instructions="Write a short story based on the given outline.",
    output_type=str,
)


async def main():
    input_prompt = input("What kind of story do you want? ")

    # Ensure the entire workflow is a single trace
    with trace("Deterministic story flow"):
        # 1. Generate an outline
        outline_result = await Runner.run(
            story_outline_agent,
            input_prompt,
        )
        print("Outline generated")

        # 2. Check the outline
        outline_checker_result = await Runner.run(
            outline_checker_agent,
            outline_result.final_output,
        )

        # 3. Add a gate to stop if the outline is not good quality or not a scifi story
        assert isinstance(outline_checker_result.final_output, OutlineCheckerOutput)
        if not outline_checker_result.final_output.good_quality:
            print("Outline is not good quality, so we stop here.")
            exit(0)

        if not outline_checker_result.final_output.is_scifi:
            print("Outline is not a scifi story, so we stop here.")
            exit(0)

        print("Outline is good quality and a scifi story, so we continue to write the story.")

        # 4. Write the story
        story_result = await Runner.run(
            story_agent,
            outline_result.final_output,
        )
        print(f"Story: {story_result.final_output}")


if __name__ == "__main__":
    asyncio.run(main())
Lines changed: 99 additions & 0 deletions
from __future__ import annotations

import asyncio
from typing import Any, Literal

from pydantic import BaseModel

from agents import (
    Agent,
    FunctionToolResult,
    ModelSettings,
    RunContextWrapper,
    Runner,
    ToolsToFinalOutputFunction,
    ToolsToFinalOutputResult,
    function_tool,
)

"""
This example shows how to force the agent to use a tool. It uses `ModelSettings(tool_choice="required")`
to force the agent to use any tool.

You can run it with 3 options:
1. `default`: The default behavior, which is to send the tool output to the LLM. In this case,
    `tool_choice` is not set, because otherwise it would result in an infinite loop - the LLM would
    call the tool, the tool would run and send the results to the LLM, and that would repeat
    (because the model is forced to use a tool every time.)
2. `first_tool`: The first tool result is used as the final output.
3. `custom`: A custom tool use behavior function is used. The custom function receives all the tool
    results, and chooses to use the first tool result to generate the final output.

Usage:
python examples/agent_patterns/forcing_tool_use.py -t default
python examples/agent_patterns/forcing_tool_use.py -t first_tool
python examples/agent_patterns/forcing_tool_use.py -t custom
"""


class Weather(BaseModel):
    city: str
    temperature_range: str
    conditions: str


@function_tool
def get_weather(city: str) -> Weather:
    print("[debug] get_weather called")
    return Weather(city=city, temperature_range="14-20C", conditions="Sunny with wind")


async def custom_tool_use_behavior(
    context: RunContextWrapper[Any], results: list[FunctionToolResult]
) -> ToolsToFinalOutputResult:
    weather: Weather = results[0].output
    return ToolsToFinalOutputResult(
        is_final_output=True, final_output=f"{weather.city} is {weather.conditions}."
    )


async def main(tool_use_behavior: Literal["default", "first_tool", "custom"] = "default"):
    if tool_use_behavior == "default":
        behavior: Literal["run_llm_again", "stop_on_first_tool"] | ToolsToFinalOutputFunction = (
            "run_llm_again"
        )
    elif tool_use_behavior == "first_tool":
        behavior = "stop_on_first_tool"
    elif tool_use_behavior == "custom":
        behavior = custom_tool_use_behavior

    agent = Agent(
        name="Weather agent",
        instructions="You are a helpful agent.",
        tools=[get_weather],
        tool_use_behavior=behavior,
        model_settings=ModelSettings(
            tool_choice="required" if tool_use_behavior != "default" else None
        ),
    )

    result = await Runner.run(agent, input="What's the weather in Tokyo?")
    print(result.final_output)


if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-t",
        "--tool-use-behavior",
        type=str,
        required=True,
        choices=["default", "first_tool", "custom"],
        help="The behavior to use for tool use. default will cause tool outputs to be sent to the model. "
        "first_tool will cause the first tool result to be used as the final output. "
        "custom will use a custom tool use behavior function.",
    )
    args = parser.parse_args()
    asyncio.run(main(args.tool_use_behavior))
