Conversation

philliphoff
Collaborator

Scaffolds a couple of initial OpenAI orchestration tests: a "hello world" orchestration and an orchestration that uses an activity as a tool. They're a bit tedious to write, as a lot of serialized data is shuffled to the activities and back; there's plenty of room for improvement, but I'd like to start having some validation of functionality.

It also adds the ability to swap model providers (which will be needed for testing the activity logic, which I don't do...yet).

opentelemetry-api==1.32.1
opentelemetry-sdk==1.32.1
openai==1.98.0
Owner


Just wanted to double-check: my understanding is that adding openai and openai-agents here will not mean that every app installing azure-functions-durable will automatically install these packages as well, correct?
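For context, a hedged sketch of the distinction in play here (this is not the repo's actual setup.py, just an illustration): only packages listed in `install_requires` in setup.py become transitive dependencies of every app that installs the library, while requirements.txt only affects environments that explicitly install from it, such as CI and local dev.

```python
# Hypothetical sketch, not this repo's actual setup.py. Packages pinned in
# requirements.txt (openai, opentelemetry, etc.) are installed only when
# someone runs `pip install -r requirements.txt`; consumers of the library
# get only what is declared below.
from setuptools import setup, find_packages

setup(
    name="azure-functions-durable",
    packages=find_packages(),
    install_requires=[
        # Runtime dependencies: every installing app pulls these in.
    ],
    extras_require={
        # Optional extra: opt-in via
        #   pip install azure-functions-durable[openai]
        "openai": ["openai", "openai-agents"],
    },
)
```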

Collaborator Author


That's my understanding as well: these are requirements just for the repo. As far as I know, the package's dependencies are specified in setup.py. We do have to be careful, I believe, not to let imports of OpenAI dependencies leak into files that shouldn't necessarily require them. That is, to use "local" imports where necessary instead of imports at the top of the file.
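The "local import" pattern being described might look like the following sketch. The package and function names here are hypothetical, purely to illustrate deferring an optional import to call time:

```python
def summarize(prompt: str) -> str:
    """A feature that needs an optional dependency only when actually invoked."""
    try:
        # Local import: evaluated at call time, not at module import time, so
        # `import this_module` succeeds even when the optional package is
        # absent. The package name below is hypothetical.
        from some_optional_openai_helper import complete
    except ImportError:
        # Graceful fallback when the optional dependency is not installed.
        return f"[optional dependency missing] echo: {prompt}"
    return complete(prompt)

print(summarize("hello"))
```

With a top-of-file `import some_optional_openai_helper`, merely importing the module would fail for every user without that package; the local import confines the requirement to the one code path that needs it.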

context_builder, openai_agent_hello_world, uses_pystein=True)

expected_state = base_expected_state()
add_activity_action(expected_state, "{\"input\":[{\"content\":\"Tell me about recursion in programming.\",\"role\":\"user\"}],\"model_settings\":{\"temperature\":null,\"top_p\":null,\"frequency_penalty\":null,\"presence_penalty\":null,\"tool_choice\":null,\"parallel_tool_calls\":null,\"truncation\":null,\"max_tokens\":null,\"reasoning\":null,\"metadata\":null,\"store\":null,\"include_usage\":null,\"response_include\":null,\"extra_query\":null,\"extra_body\":null,\"extra_headers\":null,\"extra_args\":null},\"tracing\":0,\"model_name\":null,\"system_instructions\":\"You only respond in haikus.\",\"tools\":[],\"output_schema\":null,\"handoffs\":[],\"previous_response_id\":null,\"prompt\":null}")
Owner


Not necessarily right now, but I'm wondering if we could implement a builder utility so we don't have to deal with this raw JSON...

Collaborator Author


Yes, absolutely; these tests are tedious to create right now. I just didn't want to create too many helpers up front until I see what's needed as a few more tests are added.

@philliphoff philliphoff merged commit 9580998 into durable-openai-agent Sep 5, 2025
4 checks passed
@philliphoff philliphoff deleted the philliphoff-add-initial-tests branch September 5, 2025 19:34