
Conversation


@KJ7LNW KJ7LNW commented May 23, 2025

Context

The AI was sometimes asking users for information (like file paths) that it could have found itself using available tools like search_files or list_files. This created unnecessary back-and-forth with users and made the AI seem less capable than it actually is.

Implementation

This change adds explicit instructions to the ask_followup_question tool description to ensure the AI:

  1. FIRST uses available tools (like search_files, list_files) to locate needed information
  2. ONLY requests user-provided details when those tools cannot retrieve them
  3. PRIORITIZES its own knowledge and only asks the user when multiple options require human consideration

The changes were made to the ask-followup-question.ts file and the corresponding snapshot tests were updated.
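The shape of the change can be sketched roughly as follows. The function name matches the one referenced later in this PR; the prompt wording below is paraphrased from the three points above, not the merged text:

```typescript
// Illustrative sketch of the change to ask-followup-question.ts.
// The real description text differs; the three numbered constraints
// are the substance of this PR.
function getAskFollowupQuestionDescription(): string {
	return [
		"## ask_followup_question",
		"Ask the user a question only when necessary. IMPORTANT:",
		"1. FIRST use available tools (e.g. search_files, list_files) to locate needed information.",
		"2. ONLY request user-provided details when those tools cannot retrieve them.",
		"3. PRIORITIZE your own knowledge; ask only when multiple options require human consideration.",
	].join("\n")
}
```

Because this is prompt text rather than control flow, the change takes effect wherever the description is rendered into a system prompt.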

Screenshots

N/A - This is a behavior change in the AI's prompting.

How to Test

  1. Ask the AI to find a file without specifying the exact path
  2. The AI should use search_files or list_files to locate the file rather than asking you for the path
  3. The AI should only ask follow-up questions when it genuinely cannot determine the information using available tools

Get in Touch

Discord: KJ7LNW

Fixes #3873


Important

Enhances AI behavior by ensuring it uses tools before asking users for information, with changes in ask-followup-question.ts and updated snapshot tests.

  • Behavior:
    • Updated getAskFollowupQuestionDescription() in ask-followup-question.ts to ensure AI uses tools like search_files, list_files before asking users for information.
    • AI now prioritizes its own knowledge and only asks users when tools cannot retrieve information or when human input is necessary.
  • Testing:
    • Updated snapshot tests in system.test.ts.snap to reflect new behavior.
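Snapshot tests pin the rendered prompt text, so any edit to the description forces a deliberate snapshot update. A toy illustration of that mechanism (not the project's actual test harness; names and strings are hypothetical):

```typescript
// Minimal snapshot-style check: the rendered description is compared
// byte-for-byte against a stored baseline, so prompt drift fails loudly
// and must be acknowledged by updating the baseline.
const storedSnapshot =
	"ask_followup_question: FIRST use tools like search_files; " +
	"ONLY ask the user when tools cannot retrieve the information."

function renderDescription(): string {
	return (
		"ask_followup_question: FIRST use tools like search_files; " +
		"ONLY ask the user when tools cannot retrieve the information."
	)
}

function matchesSnapshot(current: string, snapshot: string): boolean {
	return current === snapshot
}
```

In the real suite this comparison is handled by the test runner's snapshot matcher; the point is that the updated `system.test.ts.snap` file now encodes the new instructions.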

This description was created by Ellipsis for ee84a5020e5979c2b7bd32d22db7aa2526d4ae04.

@KJ7LNW KJ7LNW requested review from cte and mrubens as code owners May 23, 2025 01:17
@hannesrudolph hannesrudolph moved this from New to PR [Pre Approval Review] in Roo Code Roadmap May 23, 2025
@hannesrudolph hannesrudolph moved this from PR [Needs Review] to TEMP in Roo Code Roadmap May 26, 2025
@daniel-lxs daniel-lxs moved this from TEMP to PR [Needs Review] in Roo Code Roadmap May 27, 2025
@daniel-lxs daniel-lxs moved this from PR [Needs Prelim Review] to PR [Needs Review] in Roo Code Roadmap Jun 5, 2025
@daniel-lxs daniel-lxs moved this from PR [Needs Review] to PR [Needs Prelim Review] in Roo Code Roadmap Jun 5, 2025

@hannesrudolph hannesrudolph left a comment


Code Review: Fix - Ensure AI uses tools before asking user questions

Overview

This PR addresses a UX issue where the AI was asking users for information (like file paths) that it could have found itself using available tools. The fix adds explicit instructions to the ask_followup_question tool to prioritize using available tools before asking users for information.

✅ Strengths

Problem Identification:

  • Clearly addresses a real UX pain point where AI appears less capable than it actually is
  • Well-documented issue (#3873) that this PR fixes
  • Good understanding of the root cause: insufficient guidance in tool descriptions

Implementation Approach:

  • Minimal, targeted change that addresses the core issue
  • Uses system prompt modifications rather than code logic changes
  • Maintains backward compatibility while improving behavior

Documentation:

  • Clear, structured instructions with numbered priorities
  • Specific examples of tools to use (search_files, list_files)
  • Good balance between being directive and maintaining flexibility

🔍 Technical Implementation

Changes Made:

  1. ask-followup-question.ts: Added 3 key instructions to the tool description
  2. system.test.ts.snap: Updated all snapshot tests to reflect the new prompt

Instruction Hierarchy:

1. MUST FIRST use available tools to locate information
2. ONLY request user details when tools cannot retrieve them
3. PRIORITIZE AI's own knowledge, ask only when human consideration needed

✅ Code Quality Assessment

Prompt Engineering:

  • Well-structured with clear priority ordering
  • Uses emphatic language (YOU MUST, IMPORTANT) to ensure compliance
  • Provides concrete examples (e.g., search_files, list_files)
  • Maintains original tool purpose while adding constraints

Test Coverage:

  • ✅ All snapshot tests updated consistently
  • ✅ Tests will catch future regressions in prompt changes
  • ✅ No functional tests needed, as this is a prompt-based behavior change

Maintainability:

  • Single source of truth for the tool description
  • Changes propagate automatically to all system prompt variants
  • Easy to modify or extend in the future
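The "single source of truth" point can be pictured like this: one description function feeds every system-prompt variant, so the new constraints propagate everywhere without per-mode edits. Mode names and the builder function here are hypothetical; only the pattern is taken from the review:

```typescript
// One description function is shared by all prompt variants, so a
// change in one place reaches every mode automatically.
function getAskFollowupQuestionDescription(): string {
	return "FIRST use available tools; ONLY ask the user when tools cannot help."
}

function buildSystemPrompt(mode: string): string {
	// Each mode-specific prompt embeds the same shared tool description.
	return `You are in ${mode} mode.\n\n${getAskFollowupQuestionDescription()}`
}

const variants = ["code", "architect", "ask"].map(buildSystemPrompt)
```

This is also why a single edit to `ask-followup-question.ts` required updating every snapshot in `system.test.ts.snap`: each variant's rendered prompt changed.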

🔍 Behavioral Impact Analysis

Expected Improvements:

  • Reduced unnecessary user interactions
  • More autonomous AI behavior
  • Better utilization of available tools
  • Improved user perception of AI capability

Potential Considerations:

  • AI might now spend more time searching before asking questions (generally positive)
  • Could potentially delay legitimate questions that require user input
  • May need monitoring to ensure AI doesn't get stuck in tool loops

⚠️ Minor Observations

  1. Grammar/Style:

    • Point 2 has a trailing space: "them (eg, design specific considerations). "
    • Could be more concise while maintaining clarity
  2. Examples:

    • The (eg, design specific considerations) example could be more specific
    • Consider adding more concrete examples of when user input IS appropriate
  3. Tool Coverage:

    • Currently mentions search_files, list_files - could include other relevant tools
    • Might benefit from ... to indicate this is not an exhaustive list

📋 Testing Strategy

Manual Testing Recommended:

  1. ✅ Ask AI to find files without exact paths
  2. ✅ Request information that exists in the codebase
  3. ✅ Verify AI still asks legitimate questions when needed

Areas to Monitor:

  • Response time impact from additional tool usage
  • Quality of questions when AI does ask them
  • User satisfaction with reduced back-and-forth

Security & Performance

  • ✅ No security implications
  • ✅ Minimal performance impact (may increase tool usage, decrease user interactions)
  • ✅ No external dependencies or breaking changes

Overall Assessment

LGTM

This is an excellent improvement that addresses a real UX issue with a targeted, well-thought-out solution. The implementation:

  • Solves the core problem effectively
  • Uses proper prompt engineering techniques
  • Maintains all existing functionality
  • Has comprehensive test coverage
  • Follows good software engineering practices

Impact: This change should significantly improve user experience by making the AI more autonomous and reducing unnecessary interruptions.

Risk: Very low - this is a prompt-only change that adds constraints rather than removing functionality.

The minor style suggestions above are not blockers for approval. This is a solid contribution that will meaningfully improve the AI's behavior.

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Jun 6, 2025
@hannesrudolph hannesrudolph moved this from PR [Needs Prelim Review] to PR [Needs Review] in Roo Code Roadmap Jun 9, 2025
@hannesrudolph
Collaborator

I am good with these changes.

@daniel-lxs daniel-lxs moved this from PR [Needs Review] to PR [Needs Prelim Review] in Roo Code Roadmap Jun 21, 2025
@daniel-lxs daniel-lxs moved this from PR [Needs Prelim Review] to PR [Needs Review] in Roo Code Roadmap Jun 21, 2025
@hannesrudolph hannesrudolph added the Conflict Cleanup Needed Awaiting final review. Minor merge conflicts. Maintainers to resolve. No contributor action needed. label Jun 21, 2025
Eric Wheeler and others added 2 commits June 21, 2025 13:27
This change adds explicit instructions to the ask_followup_question tool
to ensure the AI first attempts to use available tools like search_files
and list_files to locate information before asking the user questions.

Fixes #3873

Signed-off-by: Eric Wheeler <[email protected]>
@daniel-lxs daniel-lxs force-pushed the fix-ask-followup-question-tool-3873 branch from ee84a50 to 57f1090 on June 21, 2025 18:29
@daniel-lxs daniel-lxs requested a review from jr as a code owner June 21, 2025 18:29
@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Jun 21, 2025
@daniel-lxs daniel-lxs removed the Conflict Cleanup Needed Awaiting final review. Minor merge conflicts. Maintainers to resolve. No contributor action needed. label Jun 21, 2025
@daniel-lxs
Member

@mrubens
Conflicts solved here.

@hannesrudolph You gave this PR your seal of approval, can you take a look again?

@hannesrudolph
Collaborator

@KJ7LNW Closing this PR/issue as we're currently unconvinced that this addresses a clear issue due to insufficient data or linked examples demonstrating the problem. I understand how this might potentially be problematic in theory, but I haven't personally encountered it. If you can provide specific cloud links showing the issue clearly, please reopen, and we'll gladly reconsider.

@github-project-automation github-project-automation bot moved this from PR [Needs Review] to Done in Roo Code Roadmap Jun 30, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Jun 30, 2025


Development

Successfully merging this pull request may close these issues.

bug: AI asks questions instead of using available tools
