Description
What specific problem does this solve?
Currently, the todo_tool can only be invoked by the LLM in isolation within a single response.
This creates two main issues:
- Workflow inefficiency – If a user wants to update a todo list and perform another action (e.g., run a search, modify code, or call another tool) in the same LLM response, they must split it into multiple requests. This increases friction and slows down task completion.
- Inefficient and noisy UI feedback – Every modification currently reprints the entire todo list in the chat output.
- This is redundant because the list is already visible in the extension view.
- It clutters the conversation, making it harder to spot the actual change.
- It increases token usage unnecessarily, which in turn increases cost for users on metered plans.
Who is affected:
- All users who rely on the todo_tool for task tracking inside Roo Code.
- Power users who chain multiple tool actions in a single LLM response.
- Users conscious of token usage and cost.
When it happens:
- Any time a user tries to combine a todo action with another tool call in one LLM response.
- Any time a todo action is performed, triggering a full list reprint in chat.
Impact:
- Extra round trips for multi-step workflows.
- Higher token usage and cost.
- Reduced clarity in chat output.
Additional context (optional)
No response
Roo Code Task Links (Optional)
No response
Request checklist
- I've searched existing Issues and Discussions for duplicates
- This describes a specific problem with clear impact and context
Interested in implementing this?
- Yes, I'd like to help implement this feature
Implementation requirements
- I understand this needs approval before implementation begins
How should this be solved? (REQUIRED if contributing, optional otherwise)
Proposed solution
1. Multi-tool execution support
- Allow the LLM to call todo_tool alongside other tools in the same response.
- Example: in one LLM response, create a todo list and run a code search, without requiring two separate requests.
- This could be implemented by enabling parallel or sequential tool calls within the same execution context.
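The sequential variant could be sketched roughly as follows. This is a hypothetical illustration, not Roo Code's actual API: the `ToolCall`, `ToolResult`, and `executeToolCalls` names are invented here. It also addresses the error-handling risk noted under trade-offs by isolating each tool's failure so the remaining calls still run.

```typescript
// Hypothetical sketch: execute several tool calls from one LLM response
// sequentially, isolating failures so one tool's error does not block
// the others. All names here are illustrative assumptions.
interface ToolCall {
  name: string; // e.g. "todo_tool", "code_search"
  args: Record<string, unknown>;
}

interface ToolResult {
  name: string;
  ok: boolean;
  output?: unknown;
  error?: string;
}

async function executeToolCalls(
  calls: ToolCall[],
  runTool: (call: ToolCall) => Promise<unknown>,
): Promise<ToolResult[]> {
  const results: ToolResult[] = [];
  for (const call of calls) {
    try {
      const output = await runTool(call);
      results.push({ name: call.name, ok: true, output });
    } catch (err) {
      // Record the failure but keep executing the remaining tools.
      results.push({ name: call.name, ok: false, error: String(err) });
    }
  }
  return results;
}
```

A parallel variant could swap the loop for `Promise.allSettled`, at the cost of losing ordering guarantees between tools that touch shared state (such as the todo list itself).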
2. Collapsible diff-style UI feedback for todo actions
- Replace the current “full list dump” in chat with a short confirmation row:
- ✅ "Todo list created"
- ✏️ "Todo list updated"
- ✔️ "Todo list completed"
- 🗑️ "Todo list deleted" (if applicable)
- Each confirmation row has a ▶ expand arrow (collapsed by default).
- When expanded, show a diff view of the change:
  + Added: "Write unit tests for API"
  - Removed: "Fix login button alignment"
- Allow the diff section to be re-collapsed to keep the UI tidy.
- Keep the full list visible in the extension’s dedicated todo panel, but avoid duplicating it in chat unless explicitly requested.
- This reduces token usage, lowers cost, and makes changes instantly scannable without clutter.
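The diff shown on expansion could be computed by comparing the todo list before and after the tool call. A minimal sketch, assuming the list is a flat array of item strings (the `diffTodoLists` and `TodoDiff` names are hypothetical, not existing Roo Code code):

```typescript
// Hypothetical sketch: compute the added/removed entries shown when a
// confirmation row is expanded. Assumes a todo list is a flat string[].
interface TodoDiff {
  added: string[];
  removed: string[];
}

function diffTodoLists(before: string[], after: string[]): TodoDiff {
  const beforeSet = new Set(before);
  const afterSet = new Set(after);
  return {
    // Items present now but not previously.
    added: after.filter((item) => !beforeSet.has(item)),
    // Items present previously but not now.
    removed: before.filter((item) => !afterSet.has(item)),
  };
}
```

For the example above, `diffTodoLists(["Fix login button alignment"], ["Write unit tests for API"])` returns `{ added: ["Write unit tests for API"], removed: ["Fix login button alignment"] }`. A set-based diff like this treats a renamed item as one removal plus one addition; a fuzzier match would be needed to render renames as edits.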
How will we know it works? (Acceptance Criteria - REQUIRED if contributing, optional otherwise)
Given a user issues a request that involves both a todo action and another tool action
When the LLM processes the request
Then both actions should execute in the same LLM response without requiring a second request
And the UI should display a short confirmation row with an expand arrow for the diff view
And the diff view should be collapsible again after expansion
But the full list should still be accessible in the extension’s todo panel
Technical considerations (REQUIRED if contributing, optional otherwise)
No response
Trade-offs and risks (REQUIRED if contributing, optional otherwise)
- Multi-tool execution could increase complexity in error handling — need to ensure one tool’s failure doesn’t block the other.
- Diff view must be clear and readable even for large changes.
- Expand/collapse interaction should be smooth and not cause layout jumps.