Common agentic patterns extended with Temporal's durable execution capabilities.

*Adapted from [OpenAI Agents SDK agent patterns](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns)*

Before running these examples, be sure to review the [prerequisites and background on the integration](../README.md).

## Running the Examples

First, start the worker (supports all patterns):
```bash
uv run openai_agents/agent_patterns/run_worker.py
```

Then run individual examples in separate terminals:
### Deterministic Flows
Sequential agent execution with validation gates - demonstrates breaking complex tasks into smaller steps:
```bash
uv run openai_agents/agent_patterns/run_deterministic_workflow.py
```

### Parallelization
Run multiple agents in parallel and select the best result - useful for improving quality or reducing latency:
```bash
uv run openai_agents/agent_patterns/run_parallelization_workflow.py
```

### LLM-as-a-Judge
Iterative improvement using feedback loops - generate content, evaluate it, and improve until satisfied:
```bash
uv run openai_agents/agent_patterns/run_llm_as_a_judge_workflow.py
```

### Agents as Tools
Use agents as callable tools within other agents - enables composition and specialized task delegation:
```bash
uv run openai_agents/agent_patterns/run_agents_as_tools_workflow.py
```

### Agent Routing and Handoffs
Route requests to specialized agents based on content analysis (adapted for non-streaming):
```bash
uv run openai_agents/agent_patterns/run_routing_workflow.py
```
### Input Guardrails
Pre-execution validation to prevent unwanted requests - demonstrates safety mechanisms:
```bash
uv run openai_agents/agent_patterns/run_input_guardrails_workflow.py
```

### Output Guardrails
Post-execution validation to detect sensitive content - ensures safe responses:
```bash
uv run openai_agents/agent_patterns/run_output_guardrails_workflow.py
```

### Forcing Tool Use
Control tool execution strategies - choose between different approaches to tool usage:
```bash
uv run openai_agents/agent_patterns/run_forcing_tool_use_workflow.py
```
## Pattern Details

### Deterministic Flows
A common tactic is to break down a task into a series of smaller steps. Each task can be performed by an agent, and the output of one agent is used as input to the next. For example, if your task was to generate a story, you could break it down into the following steps:

1. Generate an outline
2. Check outline quality and genre
3. Write the story (only if outline passes validation)

Each of these steps can be performed by an agent. The output of one agent is used as input to the next.
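
Below is a minimal sketch of the idea using the OpenAI Agents SDK directly; the actual example wraps this logic in a Temporal workflow, and the agent names, instructions, and `OutlineCheck` model here are illustrative rather than the repo's code:

```python
import asyncio

from pydantic import BaseModel

from agents import Agent, Runner


class OutlineCheck(BaseModel):
    # Structured output used as a validation gate between steps.
    good_quality: bool
    is_scifi: bool


outline_agent = Agent(name="Outline agent", instructions="Write a short story outline.")
checker_agent = Agent(
    name="Outline checker",
    instructions="Judge the outline's quality and whether it is sci-fi.",
    output_type=OutlineCheck,
)
story_agent = Agent(name="Story agent", instructions="Write a short story from the outline.")


async def main() -> None:
    # Step 1: generate an outline.
    outline = (await Runner.run(outline_agent, "A story about time travel")).final_output

    # Step 2: validate the outline; stop early if the gate fails.
    check = (await Runner.run(checker_agent, outline)).final_output
    if not (check.good_quality and check.is_scifi):
        print("Outline rejected, stopping.")
        return

    # Step 3: only now write the full story, feeding the outline forward.
    print((await Runner.run(story_agent, outline)).final_output)


asyncio.run(main())
```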
### Parallelization
Running multiple agents in parallel is a common pattern. This can be useful both for latency (e.g. if you have multiple steps that don't depend on each other) and for quality, e.g. generating multiple responses and picking the best one.
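
As a rough sketch (plain Agents SDK, with the Temporal wiring omitted and illustrative agent names), you can launch several generations concurrently and have a second agent pick the best one:

```python
import asyncio

from agents import Agent, Runner

translator = Agent(name="Spanish translator", instructions="Translate the user's message to Spanish.")
picker = Agent(name="Picker", instructions="Pick the best translation from the numbered options.")


async def main() -> None:
    msg = "Ship it once the tests pass."

    # Run several candidate generations in parallel.
    results = await asyncio.gather(*(Runner.run(translator, msg) for _ in range(3)))
    options = "\n\n".join(f"{i + 1}. {r.final_output}" for i, r in enumerate(results))

    # A second agent selects the best candidate.
    best = await Runner.run(picker, f"Input: {msg}\n\nTranslations:\n{options}")
    print(best.final_output)


asyncio.run(main())
```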
### LLM-as-a-Judge
LLMs can often improve the quality of their output if given feedback. A common pattern is to generate a response using a model, and then use a second model to provide feedback. You can even use a small model for the initial generation and a larger model for the feedback, to optimize cost.

For example, you could use an LLM to generate an outline for a story, and then use a second LLM to evaluate the outline and provide feedback. You can then use the feedback to improve the outline, and repeat until the judge is satisfied with it.
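
A minimal sketch of the feedback loop (plain Agents SDK; the `Evaluation` model, agent instructions, and retry limit are illustrative, and the Temporal wiring is omitted):

```python
import asyncio
from typing import Literal

from pydantic import BaseModel

from agents import Agent, Runner


class Evaluation(BaseModel):
    feedback: str
    score: Literal["pass", "needs_improvement"]


generator = Agent(
    name="Outline generator",
    instructions="Write or revise a story outline, applying any feedback you are given.",
)
judge = Agent(
    name="Judge",
    instructions="Evaluate the outline and give actionable feedback.",
    output_type=Evaluation,
)


async def main() -> None:
    prompt = "An outline for a detective story"
    outline = (await Runner.run(generator, prompt)).final_output

    # Generate, judge, and revise until the judge is satisfied (or we give up).
    for _ in range(3):
        evaluation = (await Runner.run(judge, outline)).final_output
        if evaluation.score == "pass":
            break
        outline = (
            await Runner.run(
                generator,
                f"{prompt}\n\nPrevious draft:\n{outline}\n\nFeedback:\n{evaluation.feedback}",
            )
        ).final_output

    print(outline)


asyncio.run(main())
```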
### Handoffs and Routing
In many situations, you have specialized sub-agents that handle specific tasks. You can use handoffs to route the task to the right agent. For example, you might have a frontline agent that receives a request, and then hands off to a specialized agent based on the language of the request.
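
As a rough illustration (not the repo's code, and with the Temporal wiring omitted), a triage agent can list the specialists as `handoffs` and let the SDK route the conversation:

```python
import asyncio

from agents import Agent, Runner

spanish_agent = Agent(name="Spanish agent", instructions="You only respond in Spanish.")
french_agent = Agent(name="French agent", instructions="You only respond in French.")

# The triage agent inspects the request and hands off to a specialist,
# which then owns the conversation from that point onwards.
triage_agent = Agent(
    name="Triage agent",
    instructions="Hand off to the agent that matches the language of the request.",
    handoffs=[spanish_agent, french_agent],
)


async def main() -> None:
    result = await Runner.run(triage_agent, "Bonjour, pouvez-vous m'aider ?")
    print(result.final_output)


asyncio.run(main())
```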
### Agents as Tools
The mental model for handoffs is that the new agent "takes over". It sees the previous conversation history, and owns the conversation from that point onwards. However, this is not the only way to use agents. You can also use agents as a tool - the tool agent goes off and runs on its own, and then returns the result to the original agent.

For example, you could model a translation task as tool calls instead: rather than handing off to a language-specific agent, you could call the agent as a tool, and then use the result in the next step. This enables things like translating into multiple languages at once.
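
A minimal sketch using the SDK's `Agent.as_tool` helper (illustrative agents and tool names, Temporal wiring omitted):

```python
import asyncio

from agents import Agent, Runner

spanish_agent = Agent(name="Spanish translator", instructions="Translate the user's message to Spanish.")
french_agent = Agent(name="French translator", instructions="Translate the user's message to French.")

# Instead of handing off, expose the translators as tools so the orchestrator
# keeps control of the conversation and can call several of them in one turn.
orchestrator = Agent(
    name="Orchestrator",
    instructions="Use the translation tools to satisfy the user's request.",
    tools=[
        spanish_agent.as_tool(tool_name="translate_to_spanish", tool_description="Translate text to Spanish."),
        french_agent.as_tool(tool_name="translate_to_french", tool_description="Translate text to French."),
    ],
)


async def main() -> None:
    result = await Runner.run(orchestrator, "Say 'good morning' in Spanish and French.")
    print(result.final_output)


asyncio.run(main())
```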
### Guardrails
Related to parallelization, you often want to run input guardrails to make sure the inputs to your agents are valid. For example, if you have a customer support agent, you might want to make sure that the user isn't trying to ask for help with a math problem.

You can definitely do this without any special Agents SDK features by using parallelization, but we support a special guardrail primitive. Guardrails can have a "tripwire" - if the tripwire is triggered, the agent execution will immediately stop and a `GuardrailTripwireTriggered` exception will be raised.

This is really useful for latency: for example, you might have a very fast model that runs the guardrail and a slow model that runs the actual agent. You wouldn't want to wait for the slow model to finish, so guardrails let you quickly reject invalid inputs.
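
A minimal sketch of an input guardrail (plain Agents SDK; the math-homework check and agent names are illustrative, the Temporal wiring is omitted, and the SDK raises `InputGuardrailTripwireTriggered` for input guardrails specifically):

```python
import asyncio

from pydantic import BaseModel

from agents import (
    Agent,
    GuardrailFunctionOutput,
    InputGuardrailTripwireTriggered,
    RunContextWrapper,
    Runner,
    input_guardrail,
)


class HomeworkCheck(BaseModel):
    is_math_homework: bool


# A small, fast agent runs the check so invalid requests are rejected quickly.
guardrail_agent = Agent(
    name="Guardrail check",
    instructions="Decide whether the user is asking you to do math homework.",
    output_type=HomeworkCheck,
)


@input_guardrail
async def math_guardrail(ctx: RunContextWrapper, agent: Agent, user_input: str) -> GuardrailFunctionOutput:
    result = await Runner.run(guardrail_agent, user_input, context=ctx.context)
    return GuardrailFunctionOutput(
        output_info=result.final_output,
        # Tripping the wire aborts the run before the slower support agent finishes.
        tripwire_triggered=result.final_output.is_math_homework,
    )


support_agent = Agent(
    name="Customer support agent",
    instructions="Help customers with their questions.",
    input_guardrails=[math_guardrail],
)


async def main() -> None:
    try:
        result = await Runner.run(support_agent, "Can you solve 2x + 3 = 11 for me?")
        print(result.final_output)
    except InputGuardrailTripwireTriggered:
        print("Guardrail tripped: math homework request rejected.")


asyncio.run(main())
```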
## Omitted Examples

The following patterns from the [reference repository](https://github.com/openai/openai-agents-python/tree/main/examples/agent_patterns) are not included in this Temporal adaptation:

- **Streaming Guardrails**: Requires streaming capabilities, which are not yet available in the Temporal integration