Conversation

@AnatoliB AnatoliB commented Sep 17, 2025

Ported the handoffs/message_filter sample from the OpenAI Agents SDK:

  • The tool function returning a random number had to be converted to an activity tool to preserve determinism (a sketch follows this list)
  • The tool activity definition was moved to function_app.py
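For illustration, a minimal sketch of what such an activity tool can look like in function_app.py, assuming the Durable Functions Python v2 programming model (the names below are illustrative, not the actual sample code):

import random
import azure.functions as func
import azure.durable_functions as df

app = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.activity_trigger(input_name="max_value")
def random_number_tool(max_value: int) -> int:
    # The nondeterministic call runs inside the activity; the orchestrator only
    # replays the recorded result, so orchestration replay stays deterministic.
    return random.randint(0, max_value)

The orchestration then invokes this activity rather than calling random.randint directly during replay.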

Fixed two issues exposed by this sample:

  • When a list is passed to the input parameter of Runner.run_sync, the ActivityModelInput object failed to serialize to JSON
  • Activity tool parameters were passed wrapped in a JSON object like {"max": 100}; fixed by parsing the wrapper and extracting the underlying value (a sketch follows this list)
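A rough sketch of the parameter unwrapping described in the second item (the helper name is hypothetical; the actual fix lives in the framework code):

import json

def unwrap_activity_tool_argument(raw_input: str):
    # Hypothetical helper: the activity tool's single parameter arrives wrapped
    # in a JSON object such as {"max": 100}; parse the wrapper and return the
    # underlying value.
    parsed = json.loads(raw_input)
    if isinstance(parsed, dict) and len(parsed) == 1:
        return next(iter(parsed.values()))
    return parsed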

@AnatoliB AnatoliB marked this pull request as ready for review September 17, 2025 18:15
@AnatoliB AnatoliB changed the title from "Add a handsoff/message_filter sample, fix passing parameters to activity tools, fix ActivityModelInput serialization" to "Add a handoffs/message_filter sample, fix passing parameters to activity tools, fix ActivityModelInput serialization" Sep 17, 2025

json_obj = ModelResponse.__pydantic_serializer__.to_json(result)
return json_obj.decode()
# Use the safe/public Pydantic API when possible; prefer model_dump_json if result is a BaseModel.
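As the inline note suggests, a public-API alternative could look roughly like this (a sketch only, not the merged code; TypeAdapter covers both BaseModel subclasses and pydantic dataclasses, and the ModelResponse import path is assumed):

from agents.items import ModelResponse  # import path assumed for the OpenAI Agents SDK type
from pydantic import TypeAdapter

def serialize_model_response(result: ModelResponse) -> str:
    # Serialize via pydantic's public TypeAdapter API instead of the internal
    # __pydantic_serializer__ attribute; dump_json returns bytes.
    return TypeAdapter(ModelResponse).dump_json(result).decode()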
Collaborator
This is fine, but I think there's another serializer function used elsewhere; I wonder if we can combine them at some point.

Also, I wonder if we should add a layer of indirection in the output of the activity rather than return OpenAI types directly, just in case we need to make versioning adjustments in the future--else we may be subject to breaking OpenAI changes. (I had a change in progress that did this.)

Owner Author
> This is fine, but I think there's another serializer function used elsewhere; I wonder if we can combine them at some point.

Sure, let's do it later.

> Also, I wonder if we should add a layer of indirection in the output of the activity rather than return OpenAI types directly, just in case we need to make versioning adjustments in the future--else we may be subject to breaking OpenAI changes. (I had a change in progress that did this.)

Are you talking about a situation when activity output is saved in the orchestration history, then the user upgrades their app to a new OpenAI SDK version, and they expect this orchestration to continue after an upgrade? Yes, we should be able to eventually handle that, good catch.

Collaborator
Yes, that's what I was referring to. Though, thinking about it again, I suppose it doesn't really matter whether the OpenAI type is returned directly or wrapped in an outer type (e.g. ActivityModelOutput). We can still add logic to the post-activity-call handler to make any adjustments needed before passing it back to the OpenAI SDK layer.
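Purely for illustration, the wrapping idea discussed here might look something like the sketch below (none of these names exist in the codebase; they are hypothetical):

from pydantic import BaseModel

CURRENT_SCHEMA_VERSION = 1  # illustrative constant

class ActivityModelOutput(BaseModel):
    schema_version: int = CURRENT_SCHEMA_VERSION
    payload_json: str  # the serialized OpenAI ModelResponse

def adapt_activity_output(output: ActivityModelOutput) -> str:
    # Hypothetical post-activity-call hook: a payload written by an older SDK
    # version could be migrated here before being handed back to the OpenAI
    # SDK layer.
    if output.schema_version < CURRENT_SCHEMA_VERSION:
        pass  # no migrations defined in this sketch
    return output.payload_json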

@AnatoliB AnatoliB merged commit 3235404 into durable-openai-agent Sep 18, 2025
4 checks passed
@AnatoliB AnatoliB deleted the anatolib/handoff-sample branch September 18, 2025 05:16