## Context

PR #2192 added `LocalConversation.switch_profile(profile_name: str)` to the SDK (`openhands-sdk/openhands/sdk/conversation/impl/local_conversation.py:498`), allowing mid-conversation LLM switching via named profiles stored by `LLMProfileStore`. However, the agent-server does not expose this capability: there is no REST endpoint for API clients to trigger a model switch.
## Current state

- **SDK layer:** `switch_profile()` is fully implemented. It looks up a profile in `LLMProfileStore` (disk-backed, `~/.openhands/profiles/`), caches it in `LLMRegistry`, and atomically updates `agent.llm` + `state.agent` under a lock.
- **Agent-server layer:** `EventService` already holds the `LocalConversation` instance and exposes it via `get_conversation()` (line 74). The router already follows the pattern `event_service = await conversation_service.get_event_service(id)` for other operations (secrets, confirmation policy, security analyzer), but no endpoint calls `switch_profile()`.
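The SDK-side swap described above can be sketched roughly as follows. This is a simplified illustration, not the actual SDK code: the class shapes, method names, and the in-memory store here are assumptions for demonstration only.

```python
import threading


class LLMProfileStore:
    """Hypothetical simplified stand-in; the real store is disk-backed
    under ~/.openhands/profiles/."""

    def __init__(self, profiles: dict[str, str]):
        self._profiles = profiles

    def load(self, name: str) -> str:
        llm = self._profiles.get(name)
        if llm is None:
            raise KeyError(f"unknown profile: {name}")
        return llm


class LocalConversationSketch:
    """Illustrates the look-up-then-atomic-swap shape of switch_profile()."""

    def __init__(self, store: LLMProfileStore):
        self._store = store
        self._lock = threading.Lock()
        self.agent_llm: str | None = None

    def switch_profile(self, profile_name: str) -> None:
        llm = self._store.load(profile_name)  # resolve the named profile first
        with self._lock:                      # then swap atomically under the lock
            self.agent_llm = llm


store = LLMProfileStore({"fast": "small-model", "deep": "large-model"})
conv = LocalConversationSketch(store)
conv.switch_profile("deep")
```

An unknown profile name fails before the lock is taken, so a bad request never leaves the conversation in a half-updated state.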
## Proposed implementation

Add a new `POST /{conversation_id}/switch_profile` endpoint in `conversation_router.py`, following the same pattern used by the existing `secrets`, `confirmation_policy`, and `security_analyzer` endpoints:
```python
@conversation_router.post(
    "/{conversation_id}/switch_profile",
    responses={404: {"description": "Conversation not found"}},
)
async def switch_conversation_profile(
    conversation_id: UUID,
    profile_name: str = Body(..., embed=True),
    conversation_service: ConversationService = Depends(get_conversation_service),
) -> Success:
    """Switch the conversation's agent LLM to a named profile."""
    event_service = await conversation_service.get_event_service(conversation_id)
    if event_service is None:
        raise HTTPException(status.HTTP_404_NOT_FOUND)
    conversation = event_service.get_conversation()
    conversation.switch_profile(profile_name)
    return Success()
```
## Why this approach

- **Consistent with existing patterns:** mirrors the shape of `POST /{id}/secrets`, `POST /{id}/confirmation_policy`, etc.
- **Separation of concerns:** model switching is an explicit action, not a side effect of sending a message.
- **Minimal changes:** ~10 lines in the router, no new models or service-layer changes needed. The SDK already does all the heavy lifting.
- **Thread-safe:** `switch_profile()` uses the state lock internally.
## Files to modify

| File | Change |
| --- | --- |
| `openhands-agent-server/openhands/agent_server/conversation_router.py` | Add `POST /{id}/switch_profile` endpoint |
## Related
- `switch_profile` implementation (PR #2192)