Simple MCP Server to enable a human-in-the-loop workflow in AI-assisted development tools like Cursor, Cline and Windsurf. This server allows you to easily provide feedback directly to the AI agent, bridging the gap between AI and you.
Note: This server is designed to run locally alongside the MCP client (e.g., Claude Desktop, Cursor), as it needs direct access to the user's operating system to display the feedback interface.
- 💬 Interactive Feedback: Ask clarifying questions and get user responses
- 🌐 Web Interface: Modern, responsive web-based interface with Markdown support
- 🖼️ GUI Interface: Native desktop application (when available)
- 📱 Multi-platform: Works on Windows, macOS, Linux (including ARM64)
- 🎯 Predefined Options: Support for multiple-choice questions with custom options
- ⌨️ Keyboard Shortcuts: Quick submission with Ctrl+Enter
In environments like Cursor, every prompt you send to the LLM is treated as a distinct request, and each one counts against your monthly limit (e.g., 500 premium requests). This becomes inefficient when you're iterating on vague instructions or correcting misunderstood output, as each follow-up clarification triggers a full new request.
This MCP server introduces a workaround: it allows the model to pause and request clarification before finalizing the response. Instead of completing the request, the model triggers a tool call (interactive_feedback) that opens an interactive feedback window. You can then provide more detail or ask for changes, and the model continues the session, all within a single request.
Under the hood, it's just a clever use of tool calls to defer the completion of the request. Since tool calls don't count as separate premium interactions, you can loop through multiple feedback cycles without consuming additional requests.
Essentially, this helps your AI assistant ask for clarification instead of guessing, without wasting another request. That means fewer wrong answers, better performance, and less wasted API usage.
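The loop described above can be sketched in plain Python. This is an illustrative stand-in, not the project's actual code: `make_interactive_feedback` is a hypothetical helper that replaces the real UI window with canned user replies, just to show how several clarification rounds fit inside a single request.

```python
from collections import deque

def make_interactive_feedback(canned_replies):
    """Return a stand-in for the interactive_feedback tool that pops
    canned user replies instead of opening a feedback window."""
    replies = deque(canned_replies)

    def interactive_feedback(message, predefined_options=None):
        # A real server would show the question (and any predefined
        # options) in a web/GUI window and block until the user answers;
        # here we just dequeue a scripted reply. An empty reply signals
        # that the model may finish the request.
        return replies.popleft() if replies else ""

    return interactive_feedback

# The agent loops: ask, refine, ask again -- one premium request total.
ask = make_interactive_feedback(["Use approach B", ""])
first = ask("Which approach should we use?", ["Option A", "Option B"])
second = ask("Anything else before I finish?")
```

The empty second answer is the signal (per the custom rules later in this README) that the model can stop calling the tool and end the request.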
- 💰 Reduced Premium API Calls: Avoid wasting expensive API calls generating code based on guesswork.
- ✅ Fewer Errors: Clarification before action means less incorrect code and wasted time.
- ⏱️ Faster Cycles: Quick confirmations beat debugging wrong guesses.
- 🎮 Better Collaboration: Turns one-way instructions into a dialogue, keeping you in control.
This server exposes the following tool via the Model Context Protocol (MCP):
interactive_feedback: Asks the user a question and returns their answer. Can display predefined options for quick selection.
# Simple question
interactive_feedback(
message="Do you approve this code change?"
)
# Multiple choice question
interactive_feedback(
message="Which approach should we use?",
predefined_options=["Option A", "Option B", "Option C"]
)
# Markdown-formatted question
interactive_feedback(
message="""## Code Review
Please review the following changes:
- Added error handling
- Improved performance
- Updated documentation
**Do you want to proceed?**"""
)
Prerequisites:
- Python 3.11 or newer
- uv (Python package manager). Install it with:
  - Windows: pip install uv
  - Linux: curl -LsSf https://astral.sh/uv/install.sh | sh
  - macOS: brew install uv
Get the code:
- Clone this repository:
  git clone https://github.com/pauoliva/interactive-feedback-mcp.git
  cd interactive-feedback-mcp
- Or download the source code.
Install dependencies:
uv sync
This MCP server automatically adapts to your environment and supports multiple interface modes:
- 🌐 Web Mode: Modern web-based interface with full Markdown support, beautiful styling, and responsive design (recommended)
- 🖼️ GUI Mode: Native desktop application using PySide6 (when available)
The server automatically chooses the best interface, but you can override this with the INTERACTIVE_FEEDBACK_UI environment variable:
# Force web interface (recommended)
export INTERACTIVE_FEEDBACK_UI=web
# Force GUI interface
export INTERACTIVE_FEEDBACK_UI=gui
# Auto-detect (default, prioritizes web)
export INTERACTIVE_FEEDBACK_UI=auto
Recommendation: Use web mode for the best experience: it provides beautiful Markdown rendering and responsive design, and works consistently across all platforms.
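The selection logic described above might look roughly like the sketch below. `choose_ui_mode` is a hypothetical name, and the real server's detection may differ; the point is that web is the default and GUI mode degrades gracefully when PySide6 is missing.

```python
import os

def choose_ui_mode(env=None):
    """Pick the interface mode: honor INTERACTIVE_FEEDBACK_UI, prefer
    web otherwise, and fall back to web when PySide6 is unavailable.
    Illustrative sketch, not the project's actual detection code."""
    if env is None:
        env = os.environ
    mode = env.get("INTERACTIVE_FEEDBACK_UI", "auto").lower()
    if mode == "gui":
        try:
            import PySide6  # noqa: F401 -- availability probe only
            return "gui"
        except ImportError:
            # e.g. no compatible PySide6 wheel on ARM64 Linux
            return "web"
    # "web" and "auto" both resolve to the web interface,
    # since auto-detection prioritizes web.
    return "web"
```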
- ✅ Windows: Full Web and GUI support
- ✅ macOS: Full Web and GUI support
- ✅ Linux x86_64: Full Web and GUI support
- ✅ Linux ARM64 (Raspberry Pi, etc.): Web support; GUI support with a compatible PySide6 version (6.6.x+)
Note for ARM64 Linux users: If you encounter PySide6 compatibility issues, the server will automatically fall back to Web mode. For GUI support on ARM64, ensure you have PySide6 6.6.0 or newer.
Add the following configuration to your Cursor MCP settings:
{
"mcpServers": {
"interactive-feedback": {
"command": "uv",
"args": [
"--directory",
"/path/to/interactive-feedback-mcp",
"run",
"server.py"
]
}
}
}
Add the following to your claude_desktop_config.json:
{
"mcpServers": {
"interactive-feedback": {
"command": "uv",
"args": [
"--directory",
"/path/to/interactive-feedback-mcp",
"run",
"server.py"
],
"timeout": 600,
"autoApprove": [
"interactive_feedback"
]
}
}
}
Remember to change the /path/to/interactive-feedback-mcp path to the actual path where you cloned the repository on your system.
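If you want to avoid typos in that path, a small helper can build the server entry with an absolute path. This is a hypothetical convenience script, not part of the repository; `mcp_server_entry` and its argument are illustrative names.

```python
import json
from pathlib import Path

def mcp_server_entry(repo_dir):
    """Build the "interactive-feedback" entry shown above, resolving
    repo_dir to an absolute path so the placeholder can't slip through."""
    directory = Path(repo_dir).expanduser().resolve()
    return {
        "interactive-feedback": {
            "command": "uv",
            "args": ["--directory", str(directory), "run", "server.py"],
        }
    }

# Print a ready-to-paste mcpServers block for the current directory.
print(json.dumps({"mcpServers": mcp_server_entry(".")}, indent=2))
```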
Add the following to your AI assistant's custom rules (in Cursor Settings > Rules > User Rules):
If requirements or instructions are unclear, use the interactive_feedback tool to ask the user clarifying questions before proceeding; do not make assumptions. Whenever possible, present the user with predefined options through the interactive_feedback MCP tool to facilitate quick decisions.
Whenever you're about to complete a user request, call the interactive_feedback tool to request user feedback before ending the process. If the feedback is empty, end the request and do not call the tool again in a loop.
This will ensure your AI assistant always uses this MCP server to request user feedback when the prompt is unclear and before marking the task as completed.
Test the MCP server functionality:
uv run python test_mcp_server.py
Test the web interface directly:
uv run python feedback_web.py --prompt "Test question" --output-file result.json
interactive-feedback-mcp/
├── server.py              # Main MCP server
├── feedback_web.py        # Web interface implementation
├── feedback_ui.py         # GUI interface implementation
├── test_mcp_server.py     # MCP protocol tests
├── DEVELOPMENT_NOTES.md   # Development experience summary
├── pyproject.toml         # Project configuration
└── README.md              # This file
- All user interfaces run locally
- No data is transmitted to external servers
- Temporary files are automatically cleaned up
- User approval required for all feedback requests
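The temp-file cleanup point above can be sketched as follows, assuming the web interface writes the user's answer to the JSON file passed via --output-file (as in the testing commands below). `collect_feedback` and the injectable `runner` parameter are illustrative, not the project's actual API.

```python
import json
import os
import subprocess
import tempfile

def collect_feedback(prompt, runner=subprocess.run):
    """Ask the web UI for feedback via a temp file, then delete the file.

    `runner` is injectable so this sketch can be exercised without the
    real feedback_web.py; by default it launches the actual process.
    """
    fd, path = tempfile.mkstemp(suffix=".json")
    os.close(fd)  # mkstemp opens the file; we only need the path
    try:
        runner(
            ["uv", "run", "python", "feedback_web.py",
             "--prompt", prompt, "--output-file", path],
            check=True,
        )
        with open(path) as f:
            return json.load(f)
    finally:
        os.unlink(path)  # the temporary file is always cleaned up
```

Using try/finally guarantees the answer never lingers on disk, even if the interface process fails, which is what "temporary files are automatically cleaned up" implies.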
- Python 3.11+
- FastMCP 2.0+
- Optional: PySide6 (for GUI interface)
# Install with development dependencies
uv sync --extra gui
# Run the server
uv run server.py
# Run tests
uv run python test_mcp_server.py
- Development Notes - Detailed development experience and best practices
- MCP Official Documentation
- Cursor MCP Guide
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
Developed by Fábio Ferreira (@fabiomlferreira).
Enhanced by Pau Oliva (@pof) with ideas from Tommy Tong's interactive-mcp.
Note: This is a simplified version focused on core interactive feedback functionality. Advanced features like image processing and MCP Sampling are not included due to current limitations in Cursor's MCP support.