jonathanfan-ee/interactive-feedback-mcp

🗣️ Interactive Feedback MCP

Simple MCP server that enables a human-in-the-loop workflow in AI-assisted development tools such as Cursor, Cline, and Windsurf. It lets you provide feedback directly to the AI agent, bridging the gap between the AI and you.

Note: This server is designed to run locally alongside the MCP client (e.g., Claude Desktop, Cursor), as it needs direct access to the user's operating system to display the feedback interface.

✨ Features

  • 💬 Interactive Feedback: Ask clarifying questions and get user responses
  • 🌐 Web Interface: Modern, responsive web-based interface with Markdown support
  • 🖼️ GUI Interface: Native desktop application (when available)
  • 📱 Multi-platform: Works on Windows, macOS, and Linux (including ARM64)
  • 🎯 Predefined Options: Support for multiple-choice questions with custom options
  • ⌨️ Keyboard Shortcuts: Quick submission with Ctrl+Enter

💡 Why Use This?

In environments like Cursor, every prompt you send to the LLM is treated as a distinct request, and each one counts against your monthly limit (e.g. 500 premium requests). This becomes inefficient when you're iterating on vague instructions or correcting misunderstood output, as each follow-up clarification triggers a full new request.

This MCP server introduces a workaround: it allows the model to pause and request clarification before finalizing the response. Instead of completing the request, the model triggers a tool call (interactive_feedback) that opens an interactive feedback window. You can then provide more detail or ask for changes, and the model continues the session, all within a single request.

Under the hood, it's just a clever use of tool calls to defer the completion of the request. Since tool calls don't count as separate premium interactions, you can loop through multiple feedback cycles without consuming additional requests.

Essentially, this helps your AI assistant ask for clarification instead of guessing, without wasting another request. That means fewer wrong answers, better performance, and less wasted API usage.

  • 💰 Reduced Premium API Calls: Avoid wasting expensive API calls generating code based on guesswork.
  • ✅ Fewer Errors: Clarification before action means less incorrect code and wasted time.
  • ⏱️ Faster Cycles: Quick confirmations beat debugging wrong guesses.
  • 🎮 Better Collaboration: Turns one-way instructions into a dialogue, keeping you in control.

🛠️ Tools

This server exposes the following tool via the Model Context Protocol (MCP):

  • interactive_feedback: Asks the user a question and returns their answer. Can display predefined options for quick selection.

Example Usage

# Simple question
interactive_feedback(
    message="Do you approve this code change?"
)

# Multiple choice question
interactive_feedback(
    message="Which approach should we use?",
    predefined_options=["Option A", "Option B", "Option C"]
)

# Markdown-formatted question
interactive_feedback(
    message="""## Code Review

Please review the following changes:

- Added error handling
- Improved performance
- Updated documentation

**Do you want to proceed?**"""
)

📦 Installation

  1. Prerequisites:

    • Python 3.11 or newer
    • uv (Python package manager). Install it with:
      • Windows: pip install uv
      • Linux: curl -LsSf https://astral.sh/uv/install.sh | sh
      • macOS: brew install uv
  2. Get the code:

    • Clone this repository:
      git clone https://github.com/pauoliva/interactive-feedback-mcp.git
      cd interactive-feedback-mcp
    • Or download the source code.
  3. Install dependencies:

    uv sync

🖥️ Environment Support

This MCP server automatically adapts to your environment and supports multiple interface modes:

  • 🌐 Web Mode: Modern web-based interface with full Markdown support, beautiful styling, and responsive design (recommended)
  • 🖼️ GUI Mode: Native desktop application using PySide6 (when available)

Interface Selection

The server automatically chooses the best interface, but you can override this with the INTERACTIVE_FEEDBACK_UI environment variable:

# Force web interface (recommended)
export INTERACTIVE_FEEDBACK_UI=web

# Force GUI interface
export INTERACTIVE_FEEDBACK_UI=gui

# Auto-detect (default, prioritizes web)
export INTERACTIVE_FEEDBACK_UI=auto

Recommendation: Use web mode for the best experience - it provides beautiful Markdown rendering, responsive design, and works consistently across all platforms.

Platform Compatibility

  • ✅ Windows: Full Web and GUI support
  • ✅ macOS: Full Web and GUI support
  • ✅ Linux x86_64: Full Web and GUI support
  • ✅ Linux ARM64 (Raspberry Pi, etc.): Web support; GUI support with a compatible PySide6 version (6.6.x+)

Note for ARM64 Linux users: If you encounter PySide6 compatibility issues, the server will automatically fall back to Web mode. For GUI support on ARM64, ensure you have PySide6 6.6.0 or newer.

⚙️ Configuration

For Cursor IDE

Add the following configuration to your Cursor MCP settings:

{
  "mcpServers": {
    "interactive-feedback": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/interactive-feedback-mcp",
        "run",
        "server.py"
      ]
    }
  }
}

For Claude Desktop

Add the following to your claude_desktop_config.json:

{
  "mcpServers": {
    "interactive-feedback": {
      "command": "uv",
      "args": [
        "--directory",
        "/path/to/interactive-feedback-mcp",
        "run",
        "server.py"
      ],
      "timeout": 600,
      "autoApprove": [
        "interactive_feedback"
      ]
    }
  }
}

Remember to change the /path/to/interactive-feedback-mcp path to the actual path where you cloned the repository on your system.

Recommended Rules

Add the following to your AI assistant's custom rules (in Cursor Settings > Rules > User Rules):

If requirements or instructions are unclear, use the interactive_feedback tool to ask the user clarifying questions before proceeding; do not make assumptions. Whenever possible, present the user with predefined options through the interactive_feedback MCP tool to facilitate quick decisions.

Whenever you're about to complete a user request, call the interactive_feedback tool to request user feedback before ending the process. If the feedback is empty, end the request and do not call the tool in a loop.

This will ensure your AI assistant always uses this MCP server to request user feedback when the prompt is unclear and before marking the task as completed.

🧪 Testing

Test the MCP server functionality:

uv run python test_mcp_server.py

Test the web interface directly:

uv run python feedback_web.py --prompt "Test question" --output-file result.json
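The --output-file flag makes the UI write the user's answer as JSON, which a caller can then consume. In this sketch the "user_feedback" key name is an assumption; inspect the file your build actually produces to confirm the field name:

```python
import json
from pathlib import Path


def read_feedback(path: str) -> str:
    """Parse the UI's JSON output file.

    The "user_feedback" key is a hypothetical field name used for
    illustration; check the real result.json for the actual schema.
    """
    data = json.loads(Path(path).read_text())
    return data.get("user_feedback", "")
```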

📁 Project Structure

interactive-feedback-mcp/
├── server.py              # Main MCP server
├── feedback_web.py        # Web interface implementation
├── feedback_ui.py         # GUI interface implementation
├── test_mcp_server.py     # MCP protocol tests
├── DEVELOPMENT_NOTES.md   # Development experience summary
├── pyproject.toml         # Project configuration
└── README.md              # This file

🔒 Security

  • All user interfaces run locally
  • No data is transmitted to external servers
  • Temporary files are automatically cleaned up
  • User approval required for all feedback requests

🛠️ Development

Requirements

  • Python 3.11+ (matching the Installation prerequisites)
  • FastMCP 2.0+
  • Optional: PySide6 (for GUI interface)

Running in Development

# Install with development dependencies
uv sync --extra gui

# Run the server
uv run server.py

# Run tests
uv run python test_mcp_server.py


🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgements

Developed by Fábio Ferreira (@fabiomlferreira).

Enhanced by Pau Oliva (@pof) with ideas from Tommy Tong's interactive-mcp.


Note: This is a simplified version focused on core interactive feedback functionality. Advanced features like image processing and MCP Sampling are not included due to current limitations in Cursor's MCP support.
