Commit 582235f

OpenAI Agents agent pattern examples (#224)

Authored by jssmith and tconley1428

* update for plugins
* formatting
* reference main branch
* cleanup
* switch to plugins on the runners
* move around samples
* update README files
* formatting update
* formatting
* timeout adjustments
* porting agent patterns from OpenAI agents examples
* Revert uv.lock

Co-authored-by: Tim Conley <[email protected]>
1 parent 14e42d6 commit 582235f

18 files changed (+904 / -36 lines)

openai_agents/agent_patterns/README.md

Lines changed: 60 additions & 31 deletions
@@ -4,6 +4,8 @@ Common agentic patterns extended with Temporal's durable execution capabilities.

*Adapted from [OpenAI Agents SDK agent patterns](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns)*

+Before running these examples, be sure to review the [prerequisites and background on the integration](../README.md).
+
## Running the Examples

First, start the worker (supports all patterns):
@@ -13,56 +15,83 @@ uv run openai_agents/agent_patterns/run_worker.py

Then run individual examples in separate terminals:

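The worker entry point (run_worker.py, named in the hunk header above) is not part of this excerpted diff. As a rough sketch, a worker for these samples registers the pattern workflows on the task queue that the runner scripts target and attaches the OpenAIAgentsPlugin; the exact imports and workflow list below are assumptions, not the sample's actual file:

```python
# Illustrative sketch only; run_worker.py itself is not shown in this commit.
import asyncio

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin
from temporalio.worker import Worker

from openai_agents.agent_patterns.workflows.deterministic_workflow import (
    DeterministicWorkflow,
)
from openai_agents.agent_patterns.workflows.parallelization_workflow import (
    ParallelizationWorkflow,
)


async def main():
    # The plugin configures the client and worker for the OpenAI Agents integration
    client = await Client.connect(
        "localhost:7233",
        plugins=[OpenAIAgentsPlugin()],
    )
    worker = Worker(
        client,
        task_queue="openai-agents-patterns-task-queue",
        workflows=[DeterministicWorkflow, ParallelizationWorkflow],  # plus the other pattern workflows
    )
    await worker.run()


if __name__ == "__main__":
    asyncio.run(main())
```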
-## Deterministic Flows
-
-**TODO**
-
-A common tactic is to break down a task into a series of smaller steps. Each task can be performed by an agent, and the output of one agent is used as input to the next. For example, if your task was to generate a story, you could break it down into the following steps:
-
-1. Generate an outline
-2. Generate the story
-3. Generate the ending
-
-Each of these steps can be performed by an agent. The output of one agent is used as input to the next.
-
-## Handoffs and Routing
-
-**TODO**
-
-In many situations, you have specialized sub-agents that handle specific tasks. You can use handoffs to route the task to the right agent.
+### Deterministic Flows
+Sequential agent execution with validation gates - demonstrates breaking complex tasks into smaller steps:
+```bash
+uv run openai_agents/agent_patterns/run_deterministic_workflow.py
+```

-For example, you might have a frontline agent that receives a request, and then hands off to a specialized agent based on the language of the request.
+### Parallelization
+Run multiple agents in parallel and select the best result - useful for improving quality or reducing latency:
+```bash
+uv run openai_agents/agent_patterns/run_parallelization_workflow.py
+```

-## Agents as Tools
+### LLM-as-a-Judge
+Iterative improvement using feedback loops - generate content, evaluate it, and improve until satisfied:
+```bash
+uv run openai_agents/agent_patterns/run_llm_as_a_judge_workflow.py
+```

-The mental model for handoffs is that the new agent "takes over". It sees the previous conversation history, and owns the conversation from that point onwards. However, this is not the only way to use agents. You can also use agents as a tool - the tool agent goes off and runs on its own, and then returns the result to the original agent.
+### Agents as Tools
+Use agents as callable tools within other agents - enables composition and specialized task delegation:
+```bash
+uv run openai_agents/agent_patterns/run_agents_as_tools_workflow.py
+```

-For example, you could model a translation task as tool calls instead: rather than handing over to the language-specific agent, you could call the agent as a tool, and then use the result in the next step. This enables things like translating multiple languages at once.
+### Agent Routing and Handoffs
+Route requests to specialized agents based on content analysis (adapted for non-streaming):
+```bash
+uv run openai_agents/agent_patterns/run_routing_workflow.py
+```

+### Input Guardrails
+Pre-execution validation to prevent unwanted requests - demonstrates safety mechanisms:
```bash
-uv run openai_agents/agent_patterns/run_agents_as_tools_workflow.py
+uv run openai_agents/agent_patterns/run_input_guardrails_workflow.py
```

-## LLM-as-a-Judge
+### Output Guardrails
+Post-execution validation to detect sensitive content - ensures safe responses:
+```bash
+uv run openai_agents/agent_patterns/run_output_guardrails_workflow.py
+```

-**TODO**
+### Forcing Tool Use
+Control tool execution strategies - choose between different approaches to tool usage:
+```bash
+uv run openai_agents/agent_patterns/run_forcing_tool_use_workflow.py
+```

-LLMs can often improve the quality of their output if given feedback. A common pattern is to generate a response using a model, and then use a second model to provide feedback. You can even use a small model for the initial generation and a larger model for the feedback, to optimize cost.
+## Pattern Details

-For example, you could use an LLM to generate an outline for a story, and then use a second LLM to evaluate the outline and provide feedback. You can then use the feedback to improve the outline, and repeat until the LLM is satisfied with the outline.
+### Deterministic Flows
+A common tactic is to break down a task into a series of smaller steps. Each task can be performed by an agent, and the output of one agent is used as input to the next. For example, if your task was to generate a story, you could break it down into the following steps:

-## Parallelization
+1. Generate an outline
+2. Check outline quality and genre
+3. Write the story (only if outline passes validation)

-**TODO**
+Each of these steps can be performed by an agent. The output of one agent is used as input to the next.

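The step sequence above maps naturally onto a Temporal workflow that awaits one agent after another and gates the later steps on the earlier output. The sketch below is illustrative only; it is not the sample's workflows/deterministic_workflow.py, and the agent instructions and the OutlineCheck model are invented:

```python
from pydantic import BaseModel
from temporalio import workflow

from agents import Agent, Runner  # OpenAI Agents SDK


class OutlineCheck(BaseModel):
    good_quality: bool
    is_scifi: bool


@workflow.defn
class DeterministicFlowSketch:
    @workflow.run
    async def run(self, prompt: str) -> str:
        outline_agent = Agent(
            name="outline_agent",
            instructions="Write a very short story outline for the user's prompt.",
        )
        checker_agent = Agent(
            name="outline_checker",
            instructions="Judge whether the outline is good quality and whether it is science fiction.",
            output_type=OutlineCheck,
        )
        story_agent = Agent(
            name="story_agent",
            instructions="Write a short story based on the given outline.",
        )

        # Step 1: generate an outline
        outline = (await Runner.run(outline_agent, prompt)).final_output

        # Step 2: validation gate; stop early if the outline fails the check
        check: OutlineCheck = (await Runner.run(checker_agent, str(outline))).final_output
        if not (check.good_quality and check.is_scifi):
            return "Outline rejected, stopping here."

        # Step 3: write the story only if the outline passed
        return str((await Runner.run(story_agent, str(outline))).final_output)
```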
+### Parallelization
Running multiple agents in parallel is a common pattern. This can be useful for both latency (e.g. if you have multiple steps that don't depend on each other) and also for other reasons e.g. generating multiple responses and picking the best one.

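Inside a workflow this is plain asyncio fan-out over Runner.run followed by a selection step. A minimal sketch, not the sample's parallelization_workflow.py; the translation task and agent instructions are assumptions:

```python
import asyncio

from temporalio import workflow

from agents import Agent, Runner


@workflow.defn
class ParallelizationSketch:
    @workflow.run
    async def run(self, message: str) -> str:
        translator = Agent(
            name="spanish_translator",
            instructions="Translate the user's message to Spanish.",
        )
        picker = Agent(
            name="picker",
            instructions="Pick the best Spanish translation from the numbered candidates.",
        )

        # Fan out: run the same agent several times concurrently
        results = await asyncio.gather(
            *(Runner.run(translator, message) for _ in range(3))
        )
        candidates = "\n".join(
            f"{i + 1}. {r.final_output}" for i, r in enumerate(results)
        )

        # Fan in: a second agent selects the best candidate
        best = await Runner.run(
            picker, f"Original message: {message}\n\nCandidates:\n{candidates}"
        )
        return str(best.final_output)
```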
-## Guardrails
+### LLM-as-a-Judge
+LLMs can often improve the quality of their output if given feedback. A common pattern is to generate a response using a model, and then use a second model to provide feedback. You can even use a small model for the initial generation and a larger model for the feedback, to optimize cost.

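The feedback loop can be expressed as a bounded generate/judge/revise cycle. A sketch under the same caveats as above (not the sample's llm_as_a_judge_workflow.py; the Evaluation model and instructions are invented):

```python
from typing import Literal

from pydantic import BaseModel
from temporalio import workflow

from agents import Agent, Runner


class Evaluation(BaseModel):
    feedback: str
    score: Literal["pass", "needs_improvement"]


@workflow.defn
class LLMAsAJudgeSketch:
    @workflow.run
    async def run(self, prompt: str) -> str:
        writer = Agent(
            name="outline_writer",
            instructions="Write a story outline. If feedback is provided, use it to improve the outline.",
        )
        judge = Agent(
            name="judge",
            instructions="Evaluate the outline; decide if it passes or needs improvement and give feedback.",
            output_type=Evaluation,
        )

        outline = ""
        feedback = ""
        for _ in range(3):  # bounded loop: generate, judge, revise
            outline = str(
                (await Runner.run(writer, f"{prompt}\n\nPrevious feedback: {feedback}")).final_output
            )
            evaluation: Evaluation = (await Runner.run(judge, outline)).final_output
            if evaluation.score == "pass":
                break
            feedback = evaluation.feedback
        return outline
```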
-**TODO**
+### Agents as Tools
+The mental model for handoffs is that the new agent "takes over". It sees the previous conversation history, and owns the conversation from that point onwards. However, this is not the only way to use agents. You can also use agents as a tool - the tool agent goes off and runs on its own, and then returns the result to the original agent.

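Concretely, the Agents SDK exposes Agent.as_tool for this, so an orchestrator keeps control of the conversation while delegating sub-tasks. An illustrative sketch (not the sample's agents_as_tools_workflow.py; names and instructions are invented):

```python
from temporalio import workflow

from agents import Agent, Runner


@workflow.defn
class AgentsAsToolsSketch:
    @workflow.run
    async def run(self, request: str) -> str:
        spanish_agent = Agent(
            name="spanish_agent",
            instructions="Translate the user's message to Spanish.",
        )
        french_agent = Agent(
            name="french_agent",
            instructions="Translate the user's message to French.",
        )

        # The orchestrator never hands the conversation over; it calls the
        # translator agents as tools and composes the final answer itself.
        orchestrator = Agent(
            name="orchestrator",
            instructions="Use the translation tools to satisfy the user's request.",
            tools=[
                spanish_agent.as_tool(
                    tool_name="translate_to_spanish",
                    tool_description="Translate the message to Spanish",
                ),
                french_agent.as_tool(
                    tool_name="translate_to_french",
                    tool_description="Translate the message to French",
                ),
            ],
        )

        result = await Runner.run(orchestrator, request)
        return str(result.final_output)
```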
+### Guardrails
Related to parallelization, you often want to run input guardrails to make sure the inputs to your agents are valid. For example, if you have a customer support agent, you might want to make sure that the user isn't trying to ask for help with a math problem.

You can definitely do this without any special Agents SDK features by using parallelization, but we support a special guardrail primitive. Guardrails can have a "tripwire" - if the tripwire is triggered, the agent execution will immediately stop and a `GuardrailTripwireTriggered` exception will be raised.

-This is really useful for latency: for example, you might have a very fast model that runs the guardrail and a slow model that runs the actual agent. You wouldn't want to wait for the slow model to finish, so guardrails let you quickly reject invalid inputs.
+This is really useful for latency: for example, you might have a very fast model that runs the guardrail and a slow model that runs the actual agent. You wouldn't want to wait for the slow model to finish, so guardrails let you quickly reject invalid inputs.
+
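In SDK terms, an input guardrail is a decorated function attached to the agent that can trip the tripwire. A minimal sketch of the math-homework check that the input-guardrails runner exercises, assuming the public agents package API; this is not the sample's input_guardrails_workflow.py:

```python
from __future__ import annotations

from pydantic import BaseModel
from temporalio import workflow

from agents import (
    Agent,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    RunContextWrapper,
    Runner,
    TResponseInputItem,
    input_guardrail,
)


class MathHomeworkCheck(BaseModel):
    is_math_homework: bool


guardrail_agent = Agent(
    name="guardrail_agent",
    instructions="Decide whether the user is asking for math homework help.",
    output_type=MathHomeworkCheck,
)


@input_guardrail
async def math_guardrail(
    ctx: RunContextWrapper, agent: Agent, input: str | list[TResponseInputItem]
) -> GuardrailFunctionOutput:
    # A small, fast model can run this check while the main agent uses a slower one
    result = await Runner.run(guardrail_agent, input, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        tripwire_triggered=result.final_output.is_math_homework,
    )


@workflow.defn
class InputGuardrailsSketch:
    @workflow.run
    async def run(self, question: str) -> str:
        support_agent = Agent(
            name="customer_support_agent",
            instructions="Answer customer support questions.",
            input_guardrails=[math_guardrail],
        )
        try:
            result = await Runner.run(support_agent, question)
            return str(result.final_output)
        except InputGuardrailTripwireTriggered:
            # Tripwire fired: reject quickly instead of waiting on the main agent
            return "Sorry, I can't help with math homework."
```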
+## Omitted Examples
+
+The following patterns from the [reference repository](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns) are not included in this Temporal adaptation:
+
+- **Streaming Guardrails**: Requires streaming capabilities which are not yet available in the Temporal integration

openai_agents/agent_patterns/run_agents_as_tools_workflow.py

Lines changed: 3 additions & 3 deletions
@@ -20,9 +20,9 @@ async def main():
    # Execute a workflow
    result = await client.execute_workflow(
        AgentsAsToolsWorkflow.run,
-        "Translate to English: '¿Cómo estás?'",
-        id="my-workflow-id",
-        task_queue="openai-agents-task-queue",
+        "Please translate 'Good morning, how are you?' to Spanish and French",
+        id="agents-as-tools-workflow-example",
+        task_queue="openai-agents-patterns-task-queue",
    )

    print(f"Result: {result}")

openai_agents/agent_patterns/run_deterministic_workflow.py

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
import asyncio

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from openai_agents.agent_patterns.workflows.deterministic_workflow import (
    DeterministicWorkflow,
)


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(),
        ],
    )

    # Execute a workflow
    result = await client.execute_workflow(
        DeterministicWorkflow.run,
        "Write a science fiction story about time travel",
        id="deterministic-workflow-example",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())

openai_agents/agent_patterns/run_forcing_tool_use_workflow.py

Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,50 @@
import asyncio

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from openai_agents.agent_patterns.workflows.forcing_tool_use_workflow import (
    ForcingToolUseWorkflow,
)


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(),
        ],
    )

    # Execute workflows with different tool use behaviors
    print("Testing default behavior:")
    result1 = await client.execute_workflow(
        ForcingToolUseWorkflow.run,
        "default",
        id="forcing-tool-use-workflow-default",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Default result: {result1}")

    print("\nTesting first_tool behavior:")
    result2 = await client.execute_workflow(
        ForcingToolUseWorkflow.run,
        "first_tool",
        id="forcing-tool-use-workflow-first-tool",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"First tool result: {result2}")

    print("\nTesting custom behavior:")
    result3 = await client.execute_workflow(
        ForcingToolUseWorkflow.run,
        "custom",
        id="forcing-tool-use-workflow-custom",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Custom result: {result3}")


if __name__ == "__main__":
    asyncio.run(main())
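The three mode strings above select different tool-use behaviors inside the workflow, which is not shown in this diff. As a hedged sketch of what such modes typically correspond to in the Agents SDK (the get_weather tool and the exact mapping are assumptions, not the sample's code):

```python
from agents import Agent, ModelSettings, function_tool


@function_tool
def get_weather(city: str) -> str:
    """Return a canned weather report for the city."""
    return f"The weather in {city} is sunny."


def build_agent(mode: str) -> Agent:
    if mode == "default":
        # Let the model decide whether to call the tool, then summarize its output.
        return Agent(name="weather_agent", tools=[get_weather])
    if mode == "first_tool":
        # Force a tool call and use the first tool's raw output as the final answer.
        return Agent(
            name="weather_agent",
            tools=[get_weather],
            model_settings=ModelSettings(tool_choice="required"),
            tool_use_behavior="stop_on_first_tool",
        )
    # "custom": still force the tool call, but decide in code how tool results
    # become the final output (the SDK also accepts a callable for tool_use_behavior).
    return Agent(
        name="weather_agent",
        tools=[get_weather],
        model_settings=ModelSettings(tool_choice="required"),
    )
```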

openai_agents/agent_patterns/run_input_guardrails_workflow.py

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
import asyncio

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from openai_agents.agent_patterns.workflows.input_guardrails_workflow import (
    InputGuardrailsWorkflow,
)


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(),
        ],
    )

    # Execute a workflow with a normal question (should pass)
    result1 = await client.execute_workflow(
        InputGuardrailsWorkflow.run,
        "What's the capital of California?",
        id="input-guardrails-workflow-normal",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Normal question result: {result1}")

    # Execute a workflow with a math homework question (should be blocked)
    result2 = await client.execute_workflow(
        InputGuardrailsWorkflow.run,
        "Can you help me solve for x: 2x + 5 = 11?",
        id="input-guardrails-workflow-blocked",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Math homework result: {result2}")


if __name__ == "__main__":
    asyncio.run(main())

openai_agents/agent_patterns/run_llm_as_a_judge_workflow.py

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
import asyncio

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from openai_agents.agent_patterns.workflows.llm_as_a_judge_workflow import (
    LLMAsAJudgeWorkflow,
)


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(),
        ],
    )

    # Execute a workflow
    result = await client.execute_workflow(
        LLMAsAJudgeWorkflow.run,
        "A thrilling adventure story about pirates searching for treasure",
        id="llm-as-a-judge-workflow-example",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())

openai_agents/agent_patterns/run_output_guardrails_workflow.py

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
import asyncio

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from openai_agents.agent_patterns.workflows.output_guardrails_workflow import (
    OutputGuardrailsWorkflow,
)


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(),
        ],
    )

    # Execute a workflow with a normal question (should pass)
    result1 = await client.execute_workflow(
        OutputGuardrailsWorkflow.run,
        "What's the capital of California?",
        id="output-guardrails-workflow-normal",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Normal question result: {result1}")

    # Execute a workflow with input that might trigger sensitive data output
    result2 = await client.execute_workflow(
        OutputGuardrailsWorkflow.run,
        "My phone number is 650-123-4567. Where do you think I live?",
        id="output-guardrails-workflow-sensitive",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Sensitive data result: {result2}")


if __name__ == "__main__":
    asyncio.run(main())
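Output guardrails hang off the agent's response rather than its input. A sketch of the kind of check this runner exercises, assuming the public agents package API; it is not the sample's output_guardrails_workflow.py, and the phone-number heuristic is invented:

```python
from __future__ import annotations

from pydantic import BaseModel

from agents import (
    Agent,
    GuardrailFunctionOutput,
    OutputGuardrailTripwireTriggered,
    RunContextWrapper,
    Runner,
    output_guardrail,
)


class Reply(BaseModel):
    response: str


@output_guardrail
async def no_phone_numbers(
    ctx: RunContextWrapper, agent: Agent, output: Reply
) -> GuardrailFunctionOutput:
    # Crude check: trip the guardrail if the response contains both digits and a
    # dash, a rough stand-in for echoing a phone number back to the user.
    looks_like_phone = any(ch.isdigit() for ch in output.response) and "-" in output.response
    return GuardrailFunctionOutput(
        output_info={"looks_like_phone": looks_like_phone},
        tripwire_triggered=looks_like_phone,
    )


assistant = Agent(
    name="assistant",
    instructions="Answer the user's question.",
    output_type=Reply,
    output_guardrails=[no_phone_numbers],
)


async def ask(question: str) -> str:
    try:
        result = await Runner.run(assistant, question)
        return result.final_output.response
    except OutputGuardrailTripwireTriggered:
        # The response was withheld because it tripped the output guardrail
        return "Response withheld: it appeared to contain sensitive data."
```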

openai_agents/agent_patterns/run_parallelization_workflow.py

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@
import asyncio

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from openai_agents.agent_patterns.workflows.parallelization_workflow import (
    ParallelizationWorkflow,
)


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(),
        ],
    )

    # Execute a workflow
    result = await client.execute_workflow(
        ParallelizationWorkflow.run,
        "Hello, world! How are you today?",
        id="parallelization-workflow-example",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())

openai_agents/agent_patterns/run_routing_workflow.py

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@
import asyncio

from temporalio.client import Client
from temporalio.contrib.openai_agents import OpenAIAgentsPlugin

from openai_agents.agent_patterns.workflows.routing_workflow import RoutingWorkflow


async def main():
    # Create client connected to server at the given address
    client = await Client.connect(
        "localhost:7233",
        plugins=[
            OpenAIAgentsPlugin(),
        ],
    )

    # Execute a workflow
    result = await client.execute_workflow(
        RoutingWorkflow.run,
        "Bonjour! Comment allez-vous aujourd'hui?",
        id="routing-workflow-example",
        task_queue="openai-agents-patterns-task-queue",
    )
    print(f"Result: {result}")


if __name__ == "__main__":
    asyncio.run(main())
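The routing runner sends a French greeting; the underlying pattern is a triage agent that hands off to language-specific agents. A sketch using the SDK's handoffs parameter (not the sample's routing_workflow.py; agent names and instructions are invented):

```python
from temporalio import workflow

from agents import Agent, Runner


@workflow.defn
class RoutingSketch:
    @workflow.run
    async def run(self, message: str) -> str:
        french_agent = Agent(
            name="french_agent",
            instructions="You only speak French. Help the user in French.",
        )
        spanish_agent = Agent(
            name="spanish_agent",
            instructions="You only speak Spanish. Help the user in Spanish.",
        )
        english_agent = Agent(
            name="english_agent",
            instructions="You only speak English. Help the user in English.",
        )

        # The triage agent inspects the message and hands the conversation over
        # to the matching language agent, which then produces the final answer.
        triage_agent = Agent(
            name="triage_agent",
            instructions="Route the conversation to the agent matching the user's language.",
            handoffs=[french_agent, spanish_agent, english_agent],
        )

        result = await Runner.run(triage_agent, message)
        return str(result.final_output)
```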

0 commit comments
