forked from Azure/azure-functions-durable-python
Add a handoffs/message_filter sample, fix passing parameters to activity tools, fix ActivityModelInput serialization #16
Merged

Changes from all commits (4 commits):
- e777033 Match call_activity and call_activity_with_retry signatures and docst… (AnatoliB)
- e7540ba Add message_filter sample (handoff) (greenie-msft)
- 89b21c9 Fix linter issues, remove temporary logging (AnatoliB)
- 24fabbe Merge branch 'durable-openai-agent' into anatolib/handoff-sample (AnatoliB)
@@ -0,0 +1,173 @@
from __future__ import annotations

import json

from agents import Agent, HandoffInputData, Runner, function_tool, handoff
from agents.extensions import handoff_filters
from agents.models import is_gpt_5_default


def spanish_handoff_message_filter(handoff_message_data: HandoffInputData) -> HandoffInputData:
    if is_gpt_5_default():
        print("gpt-5 is enabled, so we're not filtering the input history")
        # When using gpt-5, removing some of the items could break things,
        # so we do this filtering only for other models.
        return HandoffInputData(
            input_history=handoff_message_data.input_history,
            pre_handoff_items=tuple(handoff_message_data.pre_handoff_items),
            new_items=tuple(handoff_message_data.new_items),
        )

    # First, we'll remove any tool-related messages from the message history.
    handoff_message_data = handoff_filters.remove_all_tools(handoff_message_data)

    # Second, we'll also remove the first two items from the history, just for demonstration.
    history = (
        tuple(handoff_message_data.input_history[2:])
        if isinstance(handoff_message_data.input_history, tuple)
        else handoff_message_data.input_history
    )

    # Or, you can use the HandoffInputData.clone(kwargs) method.
    return HandoffInputData(
        input_history=history,
        pre_handoff_items=tuple(handoff_message_data.pre_handoff_items),
        new_items=tuple(handoff_message_data.new_items),
    )


def main(random_number_tool):
    first_agent = Agent(
        name="Assistant",
        instructions="Be extremely concise.",
        tools=[random_number_tool],
    )

    spanish_agent = Agent(
        name="Spanish Assistant",
        instructions="You only speak Spanish and are extremely concise.",
        handoff_description="A Spanish-speaking assistant.",
    )

    second_agent = Agent(
        name="Assistant",
        instructions=(
            "Be a helpful assistant. If the user speaks Spanish, hand off to the Spanish assistant."
        ),
        handoffs=[handoff(spanish_agent, input_filter=spanish_handoff_message_filter)],
    )

    # 1. Send a regular message to the first agent.
    result = Runner.run_sync(first_agent, input="Hi, my name is Sora.")

    print("Step 1 done")

    # 2. Ask it to generate a number.
    result = Runner.run_sync(
        first_agent,
        input=result.to_input_list()
        + [{"content": "Can you generate a random number between 0 and 100?", "role": "user"}],
    )

    print("Step 2 done")

    # 3. Call the second agent.
    result = Runner.run_sync(
        second_agent,
        input=result.to_input_list()
        + [
            {
                "content": "I live in New York City. What's the population of the city?",
                "role": "user",
            }
        ],
    )

    print("Step 3 done")

    # 4. Cause a handoff to occur.
    result = Runner.run_sync(
        second_agent,
        input=result.to_input_list()
        + [
            {
                "content": "Por favor habla en español. ¿Cuál es mi nombre y dónde vivo?",
                "role": "user",
            }
        ],
    )

    print("Step 4 done")

    print("\n===Final messages===\n")

    # 5. That should have caused spanish_handoff_message_filter to be called, which means
    # the output should be missing the first two messages and have no tool calls.
    # Let's print the messages to see what happened.
    for message in result.to_input_list():
        print(json.dumps(message, indent=2))
        # tool_calls = message.tool_calls if isinstance(message, AssistantMessage) else None
        # print(f"{message.role}: {message.content}\n - Tool calls: {tool_calls or 'None'}")
    """
    $ python examples/handoffs/message_filter.py
    Step 1 done
    Step 2 done
    Step 3 done
    Step 4 done

    ===Final messages===

    {
      "content": "Can you generate a random number between 0 and 100?",
      "role": "user"
    }
    {
      "id": "...",
      "content": [
        {
          "annotations": [],
          "text": "Sure! Here's a random number between 0 and 100: **42**.",
          "type": "output_text"
        }
      ],
      "role": "assistant",
      "status": "completed",
      "type": "message"
    }
    {
      "content": "I live in New York City. What's the population of the city?",
      "role": "user"
    }
    {
      "id": "...",
      "content": [
        {
          "annotations": [],
          "text": "As of the most recent estimates, the population of New York City is approximately 8.6 million people. However, this number is constantly changing due to various factors such as migration and birth rates. For the latest and most accurate information, it's always a good idea to check the official data from sources like the U.S. Census Bureau.",
          "type": "output_text"
        }
      ],
      "role": "assistant",
      "status": "completed",
      "type": "message"
    }
    {
      "content": "Por favor habla en espa\u00f1ol. \u00bfCu\u00e1l es mi nombre y d\u00f3nde vivo?",
      "role": "user"
    }
    {
      "id": "...",
      "content": [
        {
          "annotations": [],
          "text": "No tengo acceso a esa informaci\u00f3n personal, solo s\u00e9 lo que me has contado: vives en Nueva York.",
          "type": "output_text"
        }
      ],
      "role": "assistant",
      "status": "completed",
      "type": "message"
    }
    """
    return result.final_output
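The filtering pattern in spanish_handoff_message_filter (drop tool-related items, then trim the first entries of the history) can be sketched without the agents package. The dataclass below is a stand-in for the SDK's HandoffInputData, not the real type, and dataclasses.replace plays the role the sample's comment attributes to HandoffInputData.clone(kwargs):

```python
from dataclasses import dataclass, replace

# Stand-in for the SDK's HandoffInputData; field names mirror the sample
# above, but this is an illustration, not the real agents type.
@dataclass(frozen=True)
class FakeHandoffInputData:
    input_history: tuple
    pre_handoff_items: tuple
    new_items: tuple

def demo_filter(data: FakeHandoffInputData) -> FakeHandoffInputData:
    # Drop tool-related items, mimicking handoff_filters.remove_all_tools.
    no_tools = tuple(
        item for item in data.input_history
        if item.get("type") not in ("tool_call", "tool_output")
    )
    # Trim the first two remaining items, as in the sample filter;
    # replace() returns a copy with only input_history changed.
    return replace(data, input_history=no_tools[2:])

history = (
    {"type": "message", "role": "user", "content": "Hi"},
    {"type": "message", "role": "assistant", "content": "Hello"},
    {"type": "tool_call", "name": "random_number"},
    {"type": "tool_output", "output": "42"},
    {"type": "message", "role": "user", "content": "Gracias"},
)
data = FakeHandoffInputData(input_history=history, pre_handoff_items=(), new_items=())
filtered = demo_filter(data)
print(len(filtered.input_history))  # only the last message survives
```

This mirrors why the final transcript in the sample is missing the first two messages and contains no tool calls.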
Review comment:
This is fine, but I think there's another serializer function used elsewhere; I wonder if we can combine them at some point.
Also, I wonder if we should add a layer of indirection to the output of the activity rather than return OpenAI types directly, in case we need to make versioning adjustments in the future; otherwise we may be subject to breaking OpenAI changes. (I had a change in progress that did this.)
Reply:
Sure, let's do it later.
Are you talking about a situation where the activity output is saved in the orchestration history, then the user upgrades their app to a new OpenAI SDK version and expects that orchestration to continue after the upgrade? Yes, we should eventually be able to handle that; good catch.
Reply:
Yes, that's what I was referring to. Though, thinking about it again, I suppose it doesn't really matter whether the OpenAI type is returned directly or wrapped in an outer type (e.g. ActivityModelOutput). We can still add logic to the post-activity-call handler to make any adjustments needed before passing it back to the OpenAI SDK layer.
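The versioned-envelope idea discussed in this thread could look roughly like the sketch below. The ActivityModelOutput name comes from the comment above, but its fields, the version numbers, and the upgrade rule are hypothetical, invented purely to illustrate how a post-activity-call handler could adjust payloads replayed from orchestration history after an SDK upgrade:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ActivityModelOutput:
    # Hypothetical envelope: schema_version lets the replay path detect
    # payloads written by an older version of the app and adjust them,
    # instead of handing a stale shape straight back to the SDK layer.
    schema_version: int
    payload: dict  # the serialized model response; shape unspecified here

def to_history_json(output: ActivityModelOutput) -> str:
    # What the activity would persist into the orchestration history.
    return json.dumps(asdict(output))

def from_history_json(raw: str) -> ActivityModelOutput:
    data = json.loads(raw)
    # Versioning hook: upgrade older payload shapes on read. The rule
    # below (adding a missing "annotations" field) is a made-up example.
    if data["schema_version"] < 2:
        data["payload"].setdefault("annotations", [])
        data["schema_version"] = 2
    return ActivityModelOutput(**data)

raw = to_history_json(ActivityModelOutput(schema_version=1, payload={"text": "hola"}))
restored = from_history_json(raw)
print(restored.schema_version)  # the v1 payload was upgraded on read
```

The same adjustment logic could live in the post-activity-call handler even if the raw OpenAI type is stored, as the last comment notes; the envelope just makes the stored version explicit.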