Description
What specific problem does this solve?
Who is affected:
Anyone who connects to a database and runs a query that returns more data than fits in the context window.
When does this happen: See above.
What's the current behavior:
As far as I can tell, the entire MCP response gets embedded into the context, and the whole AI iteration process grinds to a halt. The model struggles to even condense the context with the entire MCP response embedded.
What's the impact:
- AI stops working
- The user cannot work with anything that returns too much data.
Lastly, it's usually unproductive to have the LLM consume a large data set directly and try to return the correct result. I've had more success saving the data to a file and having the AI write scripts or use CLI / bash commands to work with it, as sketched below.
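As a rough illustration of that workaround, here is a minimal sketch of capping an oversized tool response and spilling it to disk. Everything here is hypothetical: the `MAX_INLINE_BYTES` threshold, the `capMcpResponse` helper, and the temp-file location are illustrative assumptions, not part of Roo Code's actual MCP handling.

```typescript
import { promises as fs } from "fs";
import * as os from "os";
import * as path from "path";

// Hypothetical threshold: anything larger is written to disk instead of
// being embedded in the model's context. A real limit would be derived
// from the model's context window and tokenizer.
const MAX_INLINE_BYTES = 50_000;

// Sketch of a guard around an MCP tool response. Small payloads are
// returned inline as they are today; large ones are spilled to a temp
// file and replaced with a short pointer plus a preview.
async function capMcpResponse(toolName: string, payload: string): Promise<string> {
  if (Buffer.byteLength(payload, "utf8") <= MAX_INLINE_BYTES) {
    return payload; // small enough to embed directly
  }

  const file = path.join(os.tmpdir(), `mcp-${toolName}-${Date.now()}.json`);
  await fs.writeFile(file, payload, "utf8");

  // Hand the model a compact summary instead of the full data set.
  return [
    `[MCP response from "${toolName}" was ${payload.length} characters,`,
    `too large to include in context. Saved to: ${file}]`,
    `Preview (first 500 chars):`,
    payload.slice(0, 500),
  ].join("\n");
}
```

With something like this in place, the model only ever sees the pointer and preview, and can process the saved file with bash, jq, or a generated script instead of reasoning over the raw dump.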
Additional context (optional)
No response
Roo Code Task Links (Optional)
No response
Request checklist
- I've searched existing Issues and Discussions for duplicates
- This describes a specific problem with clear impact and context
Interested in implementing this?
- Yes, I'd like to help implement this feature
Implementation requirements
- I understand this needs approval before implementation begins
How should this be solved? (REQUIRED if contributing, optional otherwise)
No response
How will we know it works? (Acceptance Criteria - REQUIRED if contributing, optional otherwise)
No response
Technical considerations (REQUIRED if contributing, optional otherwise)
No response
Trade-offs and risks (REQUIRED if contributing, optional otherwise)
No response