Conversation

@roomote roomote bot commented Aug 30, 2025

Summary

This PR adds a "compact prompt mode" option for local LLM providers (LM Studio and Ollama) to address timeout issues caused by large prompts that take too long to process at slower token generation speeds (7-10 tok/sec).

Changes

  • ✅ Added compactPromptMode boolean setting to the ProviderSettings type
  • ✅ Implemented compact prompt generation in the SYSTEM_PROMPT function that creates a minimal prompt with only essential sections
  • ✅ Added UI toggle in LM Studio and Ollama provider settings pages
  • ✅ Created reusable CompactPromptControl component for the settings UI
  • ✅ Added translation strings for the new feature
  • ✅ Included comprehensive tests for compact prompt functionality

How it works

When enabled, the compact prompt mode:

  • Reduces the system prompt to only essential sections (role definition, tool descriptions, basic rules)
  • Excludes non-essential sections like MCP, browser tools, capabilities, modes, and verbose instructions
  • Maintains custom instructions and language preferences
  • Significantly reduces the token count, allowing local LLMs to process prompts faster (see the sketch after this list)
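
At a high level, the branching looks like the sketch below. The PromptSections type and the section names are illustrative assumptions, not the PR's actual helpers; the real logic lives inside the SYSTEM_PROMPT function in system.ts.

```typescript
// A minimal sketch of compact prompt mode, assuming the prompt is assembled
// from a flat map of named sections; names here are illustrative only.
type PromptSections = {
  roleDefinition: string
  toolDescriptions: string
  basicRules: string
  customInstructions: string
  // Sections excluded when compact mode is on:
  mcp: string
  browserTools: string
  capabilities: string
  modes: string
}

export function buildSystemPrompt(sections: PromptSections, compactPromptMode?: boolean): string {
  const essential = [
    sections.roleDefinition,
    sections.toolDescriptions,
    sections.basicRules,
    sections.customInstructions, // custom instructions survive compact mode
  ]
  if (compactPromptMode) {
    return essential.join("\n\n")
  }
  // Full prompt: essential sections plus everything compact mode drops.
  return [...essential, sections.mcp, sections.browserTools, sections.capabilities, sections.modes].join("\n\n")
}
```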

Testing

  • All existing tests pass
  • Added new test suite for compact prompt mode functionality
  • Verified that compact prompts are significantly shorter than normal prompts (a test along these lines is sketched below)
  • Ensured the feature is optional and backward compatible
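
For illustration, such a test could look like the following, reusing the buildSystemPrompt sketch from above; the real suite lives in system-prompt.spec.ts, and a vitest-style runner and the import path are assumptions.

```typescript
// Illustrative only; imports the sketch above rather than the PR's real API.
import { describe, expect, it } from "vitest"
import { buildSystemPrompt } from "./buildSystemPrompt" // hypothetical path

const sections = {
  roleDefinition: "You are a coding assistant.",
  toolDescriptions: "read_file, write_to_file",
  basicRules: "Be concise.",
  customInstructions: "Prefer TypeScript.",
  mcp: "MCP servers...",
  browserTools: "Browser actions...",
  capabilities: "Capabilities...",
  modes: "Modes...",
}

describe("compact prompt mode", () => {
  it("produces a significantly shorter prompt", () => {
    expect(buildSystemPrompt(sections, true).length).toBeLessThan(buildSystemPrompt(sections, false).length)
  })

  it("keeps custom instructions", () => {
    expect(buildSystemPrompt(sections, true)).toContain("Prefer TypeScript.")
  })
})
```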

Screenshots

The new toggle appears in the LM Studio and Ollama settings:

  • Label: "Compact Prompt Mode"
  • Description: "Reduces the system prompt size for faster response times with local LLMs. This removes non-essential sections while keeping core functionality."
  • Note: "Recommended for [Provider] to prevent timeouts with slower local models"

Important

Adds compact prompt mode for local LLMs to reduce prompt size and prevent timeouts, with UI toggle and tests.

  • Behavior:
    • Adds compactPromptMode boolean to ProviderSettings in provider-settings.ts.
    • Implements compact prompt generation in SYSTEM_PROMPT in system.ts, reducing prompts to essential sections.
    • Adds UI toggle for compact prompt mode in LMStudio.tsx and Ollama.tsx.
    • Creates CompactPromptControl component for settings UI.
  • Testing:
    • Adds tests for compact prompt mode in system-prompt.spec.ts.
    • Verifies compact prompts are shorter and maintain essential sections.
  • Translations:
    • Adds translation strings for compact prompt mode in settings.json.

This description was created by Ellipsis for 576864b.

- Add compactPromptMode boolean setting to ProviderSettings type
- Implement compact prompt generation in SYSTEM_PROMPT function
- Add UI toggle in LM Studio and Ollama provider settings
- Create reusable CompactPromptControl component
- Add translation strings for the new feature
- Include comprehensive tests for compact prompt functionality

This feature addresses issue #7550 by providing a minimal prompt option
that reduces context size and improves response times for local LLMs
that have slower token generation speeds.
@roomote roomote bot requested review from cte, jr and mrubens as code owners August 30, 2025 06:50
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. enhancement New feature or request UI/UX UI/UX related or focused labels Aug 30, 2025

@roomote roomote bot left a comment


I reviewed my own code and found it acceptable, which is suspicious since I usually hate everything I write.

"compactPrompt": {
"title": "Compact Prompt Mode",
"description": "Reduces the system prompt size for faster response times with local LLMs. This removes non-essential sections while keeping core functionality.",
"providerNote": "Recommended for {{provider}} to prevent timeouts with slower local models"

I notice there are duplicate translation keys for compact prompt mode - lines 371-376 for LM Studio and lines 384-387 as general keys. Since the CompactPromptControl component already handles provider-specific messages dynamically, could we consolidate these to avoid duplication?

For example, we could keep just the general keys (384-387) and remove the LM Studio specific ones (371-376), as the component already prefixes with the provider name.
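
For illustration, this is roughly how the component could resolve the provider-specific note from the general keys alone. The hook and the exact key path are assumptions (a react-i18next-style setup is assumed), but the {{provider}} placeholder comes from the providerNote string above.

```typescript
// Hypothetical sketch assuming react-i18next; the key path is illustrative.
import { useTranslation } from "react-i18next"

export function useCompactPromptNote(providerName: "LM Studio" | "Ollama"): string {
  const { t } = useTranslation("settings")
  // Interpolates the provider name into the single shared providerNote string.
  return t("compactPrompt.providerNote", { provider: providerName })
}
```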

```typescript
	settings?: SystemPromptSettings,
	todoList?: TodoItem[],
	modelId?: string,
	compactPromptMode?: boolean,
```

Consider adding JSDoc comments here to explain when and why to use compact prompt mode:

Suggested change

```diff
-	compactPromptMode?: boolean,
+/**
+ * @param compactPromptMode - When true, generates a minimal prompt with only essential sections
+ * to reduce token count for local LLMs with slower processing speeds.
+ * Excludes MCP, browser tools, capabilities, modes, and verbose instructions.
+ */
+export const SYSTEM_PROMPT = async (
```

```typescript
interface CompactPromptControlProps {
	compactPromptMode?: boolean
	onChange: (value: boolean) => void
	providerName?: string
```

For better type safety, consider using a union type for providerName:

Suggested change

```diff
-	providerName?: string
+interface CompactPromptControlProps {
+	compactPromptMode?: boolean
+	onChange: (value: boolean) => void
+	providerName?: 'LM Studio' | 'Ollama'
+}
```


```typescript
// Check if we should use compact prompt mode for local LLM providers
const isLocalLLMProvider =
	this.apiConfiguration.apiProvider === "lmstudio" || this.apiConfiguration.apiProvider === "ollama"
```

The compact prompt check happens on every getSystemPrompt() call. Since the provider type doesn't change during a task, could this be cached or memoized for better performance?

You could store the result of this check as a class property when the task is initialized.
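
A sketch of that suggestion, assuming a Task class that owns the apiConfiguration; the class shape, field, and method names are illustrative, not the codebase's actual ones.

```typescript
// Minimal sketch; Task's real fields and constructor differ in the codebase.
class Task {
  private readonly isLocalLLMProvider: boolean

  constructor(private readonly apiConfiguration: { apiProvider: string; compactPromptMode?: boolean }) {
    // Computed once per task instead of on every getSystemPrompt() call.
    this.isLocalLLMProvider =
      apiConfiguration.apiProvider === "lmstudio" || apiConfiguration.apiProvider === "ollama"
  }

  useCompactPrompt(): boolean {
    return Boolean(this.apiConfiguration.compactPromptMode && this.isLocalLLMProvider)
  }
}
```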

```tsx
return (
	<div className="flex flex-col gap-2">
		<div className="flex items-center justify-between">
			<label htmlFor="compact-prompt-mode" className="font-medium">
```

Consider adding a tooltip or help icon to explain the trade-offs of enabling compact mode. Users should understand what features are excluded (MCP, browser tools, modes, etc.) when they enable this option.

You could add a small info icon next to the label that shows this information on hover.
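
One dependency-free way to do that, sketched under the assumption that a native title tooltip is acceptable; the component name and the note's wording are illustrative, and the real label lives inside CompactPromptControl.

```tsx
import React from "react"

// Illustrative sketch only, not the PR's actual markup.
const excludedNote = "Compact mode excludes MCP, browser tools, capabilities, and modes."

export function CompactPromptLabel() {
  return (
    <label htmlFor="compact-prompt-mode" className="font-medium">
      Compact Prompt Mode
      {/* Native title attribute shows the trade-offs on hover. */}
      <span title={excludedNote} aria-label={excludedNote} className="ml-1 cursor-help">
        ⓘ
      </span>
    </label>
  )
}
```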

@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Aug 30, 2025
@daniel-lxs daniel-lxs closed this Sep 3, 2025
@github-project-automation github-project-automation bot moved this from Triage to Done in Roo Code Roadmap Sep 3, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Sep 3, 2025