feat: Expose agent final response via REST API endpoint #2690
Merged
Add GET /api/conversations/{id}/response endpoint that returns the
agent's final response text using the existing SDK utility
get_agent_final_response(). This allows clients (Slack, Discord,
WhatsApp, GitHub integrations) to retrieve the agent's conclusion
without having to search through events and reimplement the extraction
logic.
- Add AgentResponseResult model to models.py
- Add get_agent_final_response() method to EventService
- Add /response endpoint to conversation_router
- Add tests for the endpoint and service method
Closes #2689
Co-authored-by: openhands <openhands@all-hands.dev>
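The new EventService method follows the repository's existing sync/async executor pattern mentioned in the PR description. A minimal sketch of that pattern, where the class body and event shape are illustrative assumptions rather than the actual implementation:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


class EventServiceSketch:
    """Sketch of the sync/async delegation pattern described in the PR.

    _get_agent_final_response_sync() does the blocking work (standing in
    for the SDK's get_agent_final_response() utility); the async wrapper
    runs it on an executor so the event loop is not blocked.
    """

    def __init__(self, events):
        # Hypothetical event shape: dicts with a "text" field.
        self._events = list(events)
        self._executor = ThreadPoolExecutor(max_workers=1)

    def _get_agent_final_response_sync(self) -> str:
        # Return the text of the most recent non-empty event, else "".
        for event in reversed(self._events):
            text = event.get("text", "")
            if text:
                return text
        return ""

    async def get_agent_final_response(self) -> str:
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            self._executor, self._get_agent_final_response_sync
        )
```

The executor hop keeps the router's async handler responsive even if the underlying event lookup is blocking I/O.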
Python API breakage checks — ✅ PASSED
REST API breakage checks (OpenAPI) — ✅ PASSED
xingyaoww commented on Apr 3, 2026:
openhands-agent-server/openhands/agent_server/conversation_router.py
Rename the REST API endpoint path to match the programmatic API naming:
- /{conversation_id}/response -> /{conversation_id}/agent_final_response
- Updated handler function name accordingly
- Updated test URL paths
Co-authored-by: openhands <openhands@all-hands.dev>
xingyaoww commented on Apr 5, 2026:
all-hands-bot (Collaborator) approved these changes on Apr 5, 2026 and left a comment:
🟢 Good taste - Simple, focused endpoint that reuses SDK utilities and follows established patterns. Comprehensive tests validate real behavior (FinishAction, MessageEvent, edge cases) rather than just asserting mocks. No complexity issues, no breaking changes, no security concerns. LGTM! ✅
Add end-to-end test in test_remote_conversation_live_server.py that:
- Creates a conversation with a patched LLM returning finish(message=...)
- Runs the agent to completion
- Hits GET /api/conversations/{id}/agent_final_response
- Verifies the response text matches the finish message
- Checks 404 for unknown conversation IDs
Also update openhands-agent-server/AGENTS.md to document that small
endpoint additions should include a live server integration test.
Co-authored-by: openhands <openhands@all-hands.dev>
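A client-side sketch of the flow the end-to-end test exercises. The base URL and helper names are hypothetical; the endpoint path, the JSON shape, and the 404 behavior come from the PR:

```python
import json
import urllib.request
from typing import Optional
from urllib.error import HTTPError


def agent_final_response_url(base_url: str, conversation_id: str) -> str:
    # Endpoint path after the rename in this PR.
    return f"{base_url}/api/conversations/{conversation_id}/agent_final_response"


def fetch_agent_final_response(base_url: str, conversation_id: str) -> Optional[str]:
    """Return the agent's final response text, or None on a 404.

    Assumes the JSON body has the shape {"response": "..."} per the
    AgentResponseResult model described in this PR.
    """
    url = agent_final_response_url(base_url, conversation_id)
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["response"]
    except HTTPError as err:
        if err.code == 404:
            return None
        raise
```

This is exactly the per-client extraction boilerplate the endpoint is meant to replace relative to searching `/search` events manually.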
@enyst would appreciate a quick 👀 here
Summary
Adds a new REST API endpoint GET /api/conversations/{conversation_id}/response that returns the agent's final response text for a conversation. It uses the existing SDK utility get_agent_final_response() to extract the text from either a FinishAction message or the last agent MessageEvent. This eliminates the need for every client (Slack, Discord, WhatsApp, GitHub integration, etc.) to independently search through events via /search and reimplement the same extraction logic to answer "what did the agent conclude?".
Changes
- models.py: Added AgentResponseResult response model with a response: str field
- event_service.py: Added get_agent_final_response() / _get_agent_final_response_sync() methods that delegate to the SDK's get_agent_final_response() utility, following the existing sync/async executor pattern
- conversation_router.py: Added GET /{conversation_id}/response endpoint that returns AgentResponseResult
- test_conversation_response.py: 7 tests covering endpoint behavior (success, empty response, 404) and service-layer logic (FinishAction, MessageEvent, empty events, inactive service)
Design
Follows Option A from the SDK comment: a dedicated endpoint that keeps ConversationInfo focused on metadata/status. The endpoint returns a simple string response, matching the agreed approach in the issue discussion.
Fixes #2689
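The extraction behavior described above (prefer a FinishAction message, otherwise fall back to the last agent MessageEvent) can be sketched with simplified stand-in event types; these dataclasses are illustrative substitutes for the real SDK types:

```python
from dataclasses import dataclass


@dataclass
class FinishAction:
    message: str


@dataclass
class MessageEvent:
    source: str  # "agent" or "user"
    text: str


def get_agent_final_response(events) -> str:
    """Simplified stand-in for the SDK utility used by this PR.

    Prefer the message of the most recent FinishAction; otherwise use
    the text of the last agent MessageEvent; return "" if neither exists.
    """
    for event in reversed(events):
        if isinstance(event, FinishAction):
            return event.message
    for event in reversed(events):
        if isinstance(event, MessageEvent) and event.source == "agent":
            return event.text
    return ""
```

The two-pass search makes the precedence explicit: a FinishAction always wins over trailing agent messages, which mirrors the FinishAction/MessageEvent cases the tests cover.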
Checklist
This PR was created by an AI assistant (OpenHands) on behalf of @xingyaoww.
Agent Server images for this PR
• GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server
Variants & Base Images
- eclipse-temurin:17-jdk
- nikolaik/python-nodejs:python3.13-nodejs22-slim
- golang:1.21-bookworm

Pull (multi-arch manifest):

```shell
# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:d64d1d3-python
```

Run
All tags pushed for this build
About Multi-Architecture Support
- The tag (d64d1d3-python) is a multi-arch manifest supporting both amd64 and arm64
- Architecture-specific tags (d64d1d3-python-amd64) are also available if needed