Problem
Title generation currently falls back to self.agent.llm when no explicit LLM is passed:
```python
# local_conversation.py:918-919
llm_to_use = llm or self.agent.llm
```
This couples title generation to the agent's LLM, but title generation is fundamentally a conversation-level concern, not an agent-level one. The agent's LLM may not always be suitable or available for this lightweight auxiliary task — and in some configurations it may not be callable at all, causing a silent fallback to simple message truncation.
The infrastructure to fix this already exists: generate_title() accepts an llm parameter at every layer of the call chain (ConversationService → EventService → LocalConversation → title_utils). But nobody passes one — AutoTitleSubscriber calls self.service.generate_title() with no LLM argument, so it always falls back to agent.llm.
Current call chain
```
AutoTitleSubscriber.__call__()              # conversation_service.py:762
  → EventService.generate_title()           # event_service.py:615 — llm=None
    → LocalConversation.generate_title()    # local_conversation.py:904 — llm=None
      → llm_to_use = llm or self.agent.llm  # falls back to agent LLM
        → generate_conversation_title()     # title_utils.py
          → generate_title_with_llm()       # title_utils.py:59 — actual completion() call
```
Proposal
Allow configuring a dedicated LLM profile for title generation, loaded from LLMProfileStore (~/.openhands/profiles/). This is consistent with how FallbackStrategy already uses profiles.
Concretely:
- Add a `title_llm_profile: str | None` config option at the conversation service or app level
- Have `AutoTitleSubscriber` (or `EventService`) load the LLM from the profile store and pass it via the existing `llm=` parameter
- If no profile is configured, preserve the current behavior (fall back to `agent.llm`)
This keeps title generation decoupled from the agent, uses existing infrastructure (profiles + the llm parameter that's already threaded through), and makes it easy to use a cheap/fast model (e.g., Haiku) for title generation regardless of what model the agent itself uses.
🤖 Generated with Claude Code