
Conversation

@notgitika (Contributor) commented Oct 3, 2025

Description

This PR implements the OpenAI Responses API as a separate model provider, supporting streaming, structured output, and tool calling.

Note: this commit does not include the extra capabilities that the Responses API supports, such as built-in tools and stateful conversation runs.
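
For context, a minimal usage sketch of the new provider. The import path, constructor arguments, and config keys below are assumptions modeled on the existing OpenAI provider, not confirmed by this PR:

```python
# Hypothetical usage sketch; the exact import path and config keys may differ in the diff.
from strands import Agent
from strands.models.openai import OpenAIResponses  # assumed to live alongside OpenAIModel

model = OpenAIResponses(
    client_args={"api_key": "<OPENAI_API_KEY>"},  # forwarded to openai.AsyncOpenAI
    model_id="gpt-4o",
    params={"temperature": 0.7},
)

agent = Agent(model=model)
agent("Summarize the Responses API in one sentence.")
```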

Related Issues

#253

Documentation PR

TODO (coming next): add a section for the Responses API within the existing OpenAI model page.

Type of Change

New feature

Testing

Added a unit test file similar to the existing test_openai.py model provider tests. I reuse the integ tests in the same file, test_model_openai.py, via pytest.parametrize with the OpenAIResponses model, so I could verify that the functionality is the same between the two models (see the sketch below).
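
Roughly, the parametrization looks like this. This is only a sketch: the fixture name, import path, and assertion are illustrative, not the exact test code:

```python
import pytest

from strands import Agent
from strands.models.openai import OpenAIModel
from strands.models.openai import OpenAIResponses  # assumed import path


@pytest.fixture(params=[OpenAIModel, OpenAIResponses])
def model(request):
    # Each integ test runs once per provider class with identical config,
    # so any behavioral difference between the two models shows up as a failure.
    return request.param(client_args={"api_key": "<OPENAI_API_KEY>"}, model_id="gpt-4o")


def test_agent_invoke(model):
    agent = Agent(model=model)
    result = agent("Say hello.")
    assert result.message is not None
```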

I ran all of the checks listed in the CONTRIBUTING.md file.

hatch run integ-test
================== 81 passed, 68 skipped, 49 warnings in 106.56s (0:01:46) ==========

hatch run test-integ tests_integ/models/test_model_openai.py -v

============= 18 passed, 2 skipped in 13.44s =============

pre-commit run --all-files

  • I ran hatch run prepare --> yes, all tests pass

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

logger.debug("config=<%s> | initializing", self.config)

@override
def update_config(self, **model_config: Unpack[OpenAIResponsesConfig]) -> None: # type: ignore[override]
Contributor:

Just wondering why we need it here: # type: ignore[override]

Contributor Author:

Good question! The base class accepts **model_config: Any, but here we use the more specific Unpack[OpenAIResponsesConfig] type, which triggers the mypy override error.

openai.py also has this: https://github.com/strands-agents/sdk-python/blob/main/src/strands/models/openai.py#L73
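
A minimal sketch of the pattern, with names simplified and placeholder config keys:

```python
from typing import Any, TypedDict

from typing_extensions import Unpack, override


class OpenAIResponsesConfig(TypedDict, total=False):
    model_id: str
    params: dict[str, Any]


class Model:
    def update_config(self, **model_config: Any) -> None:
        """Base class accepts arbitrary keyword arguments."""


class OpenAIResponses(Model):
    @override
    def update_config(self, **model_config: Unpack[OpenAIResponsesConfig]) -> None:  # type: ignore[override]
        # Narrowing **kwargs from Any to a specific TypedDict is flagged by mypy
        # as an incompatible override, hence the targeted ignore.
        self.config = dict(model_config)
```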

}

@classmethod
def _format_request_tool_message(cls, tool_result: ToolResult) -> dict[str, Any]:
Member:

do we not handle images in tool results? Seems like these get dropped?

self,
messages: Messages,
tool_specs: Optional[list[ToolSpec]] = None,
system_prompt: Optional[str] = None,
Member:

is the concept of tool_choice similar to how we have it in the existing provider?

if message.get("content") or message.get("type") in ["function_call", "function_call_output"]
]

def _format_request(
Member:

The ordering here is strange to me; it doesn't read like a story. I'd expect it to be something like the following, top to bottom. I'm not a fan of introducing a private method before it is used:

stream
_format_request
_format_request_message_content
_format_request_message_tool_call
_format_request_tool_message

"""
match event["chunk_type"]:
case "message_start":
return {"messageStart": {"role": "assistant"}}
Member:

Reach out to @zastrowm, but I believe we can make this more readable by returning the typed StreamEvents rather than the dictionaries.
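
For what that might look like, a hedged sketch; the type names below are assumed to come from strands.types.streaming and should be checked against the actual definitions:

```python
# Assumed type names; verify against strands.types.streaming before adopting.
from strands.types.streaming import MessageStartEvent, StreamEvent


def format_chunk(event: dict) -> StreamEvent:
    match event["chunk_type"]:
        case "message_start":
            # Same wire shape as the dict literal, but the TypedDict makes the
            # event structure explicit to readers and to mypy.
            message_start: MessageStartEvent = {"role": "assistant"}
            return {"messageStart": message_start}
        case _:
            raise RuntimeError(f"chunk_type=<{event['chunk_type']}> | unknown type")
```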


for message in messages:
role = message["role"]
if role == "system":
Member:

Can you explain, and add a comment, why we are skipping these? And higher up, in the description, can you add a brief note describing OpenAI Responses and how it differs fundamentally from the existing stateless models?
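
As a concrete suggestion, the requested inline comment might look something like this. The stated reason is an assumption on my part (the Responses API does take instructions separately from the input messages), so the author should confirm or correct it:

```python
for message in messages:
    role = message["role"]
    if role == "system":
        # Assumption: system content reaches the Responses API through the
        # top-level `instructions` field built in _format_request, so system
        # messages are not emitted as input items here.
        continue
```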

async with openai.AsyncOpenAI(**self.client_args) as client:
try:
response = await client.responses.create(**request)
except openai.APIError as e:
Member:

In some places we are catching RateLimitError, and in others it looks like we are just checking the error code. Same for BadRequest + context overflow.

Can you explain why we have different error handling, and unify it if possible?
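
One possible way to unify it, as a sketch: ModelThrottledException and ContextWindowOverflowException are the exception types used by the existing providers, while the context-overflow detection on BadRequestError is an assumption to be adjusted to whatever the API actually returns:

```python
from typing import Any

import openai

from strands.types.exceptions import ContextWindowOverflowException, ModelThrottledException


async def _create_response(client: openai.AsyncOpenAI, request: dict[str, Any]):
    try:
        return await client.responses.create(**request)
    except openai.RateLimitError as e:
        # Prefer the typed exception over inspecting status codes by hand.
        raise ModelThrottledException(str(e)) from e
    except openai.BadRequestError as e:
        # Assumption: context overflow surfaces as a BadRequestError whose
        # message mentions the context window; tune this check as needed.
        if "context" in str(e).lower():
            raise ContextWindowOverflowException(str(e)) from e
        raise
```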
