Add OpenResponses API client implementation#1342

Open
simonw wants to merge 4 commits into main from claude/openresponses-http-client-aEJR4
Conversation


@simonw simonw commented Jan 15, 2026

This adds llm/responses.py with sync and async model classes that
implement the OpenResponses API using httpx as the transport layer:

- ResponsesModel (sync) and AsyncResponsesModel (async) extending
  KeyModel/AsyncKeyModel for integration with the llm framework
- Pydantic models for ResponseResource and streaming events
- SSE streaming parser for real-time text deltas
- Custom error classes (ResponsesAPIError, ResponsesAuthenticationError,
  ResponsesRateLimitError, ResponsesInvalidRequestError)
- Support for tool/function calls
- Comprehensive test suite (35 tests) using TDD approach
- Add needs_key, key_env_var, Options to model classes to fix
  mypy attribute conflicts
- Use ToolCall dataclass correctly for add_tool_call
- Add key validation to raise error if no API key provided
- Remove banner-style section headings
- Clean up outdated TDD comments in tests
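
To illustrate the SSE streaming bullet above, here is a minimal sketch of a parser that extracts real-time text deltas from an SSE stream. The event name (`response.output_text.delta`) and the `delta` payload field are assumptions for illustration, not necessarily the exact wire format `llm/responses.py` handles:

```python
import json


def iter_text_deltas(lines):
    """Yield text deltas from an iterable of SSE lines.

    Sketch only: event name and payload shape are assumed, not
    taken from this PR's implementation.
    """
    event = None
    for line in lines:
        line = line.strip()
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data = json.loads(line[len("data:"):].strip())
            if event == "response.output_text.delta":
                yield data.get("delta", "")
        elif not line:
            # A blank line terminates the current SSE event.
            event = None


sse = [
    "event: response.output_text.delta",
    'data: {"delta": "Hello"}',
    "",
    "event: response.output_text.delta",
    'data: {"delta": " world"}',
    "",
]
print("".join(iter_text_deltas(sse)))  # Hello world
```

In the real client this would consume `httpx`'s streamed response line iterator rather than a list.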

simonw commented Jan 16, 2026

I manually tested it with this:

import llm
from llm.responses import ResponsesModel
from llm.default_plugins.openai_models import Chat
from llm.tools import llm_time


key = llm.get_key("openai")

chat_model = Chat(
    "gpt-5-mini",
    vision=True,
    reasoning=True,
    supports_schema=True,
    supports_tools=True,
)
print("=== chat model ===")
print(chat_model.chain("what is the time?", key=key, tools=[llm_time]).text())

print("=== responses model ===")

responses_model = ResponsesModel(
    "gpt-5-mini", "https://api.openai.com/v1", supports_schema=True, supports_tools=True
)
print(responses_model.chain("what is the time?", key=key, tools=[llm_time]).text())

It didn't work at first because tool results were not passed correctly, but I fixed that.
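
For context, passing tool results back means the follow-up request has to carry both the model's tool call and the tool's output, matched by ID. A hypothetical payload shape (the item types and field names below, such as `function_call_output` and `call_id`, are assumptions modeled on Responses-style APIs, not taken from this PR) might be:

```python
# Hypothetical follow-up request carrying a tool result back to the API.
# Item types and field names are illustrative assumptions.
followup = {
    "model": "gpt-5-mini",
    "input": [
        {"role": "user", "content": "what is the time?"},
        {
            # The model's tool call from the previous response.
            "type": "function_call",
            "call_id": "call_123",
            "name": "llm_time",
            "arguments": "{}",
        },
        {
            # The tool's result, matched to the call via call_id.
            "type": "function_call_output",
            "call_id": "call_123",
            "output": '{"utc": "2026-01-16T12:00:00Z"}',
        },
    ],
}

# The output's call_id must match the call it answers, or the API
# cannot pair the result with the pending tool call.
assert followup["input"][2]["call_id"] == followup["input"][1]["call_id"]
```

Getting that pairing wrong (or omitting the output item entirely) is consistent with the failure mode described above, where tool results were not passed correctly.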
