English | 中文
An AIGC solution based on the MCP protocol that converts ComfyUI workflows into MCP tools with zero code, bridging LLMs and ComfyUI.
- 2025-09-03: Architecture refactored from three services into a unified application; added CLI tool support; published to PyPI
- 2025-08-12: Integrated the LiteLLM framework, adding multi-model support for Gemini, DeepSeek, Claude, Qwen, and more
- Full-modal Support: TISV (Text, Image, Sound/Speech, Video) conversion and generation across all modalities
- ComfyUI Ecosystem: Built on ComfyUI, inheriting all capabilities of the open ComfyUI ecosystem
- Zero-code Development: Defines and implements the Workflow-as-MCP-Tool approach, enabling zero-code development and dynamic addition of new MCP tools
- MCP Server: Based on the MCP protocol, supporting integration with any MCP client (including but not limited to Cursor and Claude Desktop)
- Web Interface: Built on the Chainlit framework, inheriting Chainlit's UI controls and supporting integration with additional MCP servers
- One-click Deployment: Supports PyPI installation, CLI commands, Docker, and other deployment methods, ready to use out of the box
- Simplified Configuration: Uses environment variables for simple, intuitive configuration
- Multi-LLM Support: Works with mainstream LLMs including OpenAI, Ollama, Gemini, DeepSeek, Claude, Qwen, and more
Pixelle MCP adopts a unified architecture design, integrating MCP server, web interface, and file services into one application, providing:
- Web Interface: Chainlit-based chat interface supporting multimodal interaction
- MCP Endpoint: For external MCP clients (such as Cursor and Claude Desktop) to connect to
- File Service: Handles file upload, download, and storage
- Workflow Engine: Automatically converts ComfyUI workflows into MCP tools
Choose the deployment method that best suits your needs, from simple to complex:
Zero-configuration startup, ideal for quick trials and testing
```shell
# Start with one command, no system installation required
uvx pixelle@latest
```
View the uvx CLI Reference →
```shell
# Install to system
pip install -U pixelle
# Start the service
pixelle
```
View the pip CLI Reference →
After startup, a configuration wizard automatically guides you through ComfyUI connection and LLM configuration.
Supports custom workflows and secondary development
```shell
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP
# Interactive mode (recommended)
uv run pixelle
```
View the Complete CLI Reference →
```shell
# Copy example workflows to the data directory (run this in your desired project directory)
cp -r workflows/* ./data/custom_workflows/
```
Suitable for production environments and containerized deployment
```shell
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP
# Create the environment configuration file
cp .env.example .env
# Edit the .env file to configure your ComfyUI address and LLM settings
# Start all services in the background
docker compose up -d
# View logs
docker compose logs -f
```
Regardless of which method you use, after startup you can access via:
- Web Interface: http://localhost:9004 (the default username and password are both `dev`; both can be changed after startup)
- MCP Endpoint: http://localhost:9004/pixelle/mcp (for MCP clients such as Cursor and Claude Desktop to connect to)

Tip: The default port is 9004 and can be customized via the environment variable `PORT=your_port`.
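As a sketch of how an MCP client might point at this endpoint: Cursor-style clients typically read a JSON MCP configuration. The server name `pixelle` and the exact file location and schema are assumptions; consult your client's documentation.

```json
{
  "mcpServers": {
    "pixelle": {
      "url": "http://localhost:9004/pixelle/mcp"
    }
  }
}
```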
On first startup, the system will automatically detect configuration status:
- ComfyUI Connection: Ensure the ComfyUI service is running at http://localhost:8188
- LLM Configuration: Configure at least one LLM provider (OpenAI, Ollama, etc.)
- Workflow Directory: The system automatically creates the necessary directory structure
Need help? Join the community groups for support (see the Community section below)
One workflow = One MCP Tool
1. Build a workflow in ComfyUI for image Gaussian blur (Get it here), then set the `LoadImage` node's title to `$image.image!`, as shown below.
2. Export it as an API-format file and rename it to `i_blur.json`. You can export it yourself or use our pre-exported version (Get it here).
3. Copy the exported API workflow file (it must be in API format), paste it into the web page, and ask the LLM to add it as a tool.
4. After sending, the LLM automatically converts the workflow into an MCP Tool.
5. Now refresh the page and send any image to apply Gaussian blur via the LLM.
The steps are the same as above; only the workflow differs (download the workflow in UI format or API format).
The system supports ComfyUI workflows. Just design your workflow in the canvas and export it as API format. Use special syntax in node titles to define parameters and outputs.
In the ComfyUI canvas, double-click the node title to edit, and use the following DSL syntax to define parameters:
`$<param_name>.[~]<field_name>[!][:<description>]`

- `param_name`: The parameter name of the generated MCP tool function
- `~`: Optional; marks the field for URL upload processing (the system downloads the URL and returns a relative path)
- `field_name`: The corresponding input field of the node
- `!`: Marks the parameter as required
- `description`: A description of the parameter
Required parameter example:
- Set the `LoadImage` node title to: `$image.image!:Input image URL`
- Meaning: Creates a required parameter named `image`, mapped to the node's `image` field
URL upload processing example:
- Set any node title to: `$image.~image!:Input image URL`
- Meaning: Creates a required parameter named `image`; the system automatically downloads the URL, uploads it to ComfyUI, and returns a relative path
Note: `LoadImage`, `VHS_LoadAudioUpload`, `VHS_LoadVideo`, and similar nodes have this upload functionality built in, so there is no need to add the `~` marker.
Optional parameter example:
- Set the `EmptyLatentImage` node title to: `$width.width:Image width, default 512`
- Meaning: Creates an optional parameter named `width`, mapped to the node's `width` field, with a default value of 512
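In the exported API-format JSON, node titles live under `_meta.title`, so the optional-parameter example above would appear roughly as follows. This is a sketch assuming ComfyUI's usual API export layout; the node id is illustrative.

```json
{
  "5": {
    "class_type": "EmptyLatentImage",
    "_meta": { "title": "$width.width:Image width, default 512" },
    "inputs": { "width": 512, "height": 512, "batch_size": 1 }
  }
}
```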
The system automatically infers parameter types based on the current value of the node field:
- `int`: Integer values (e.g. 512, 1024)
- `float`: Floating-point values (e.g. 1.5, 3.14)
- `bool`: Boolean values (true, false)
- `str`: String values (the default type)
The system will automatically detect the following common output nodes:
- `SaveImage`: Image save node
- `SaveVideo`: Video save node
- `SaveAudio`: Audio save node
- `VHS_SaveVideo`: VHS video save node
- `VHS_SaveAudio`: VHS audio save node
Usually used for multiple outputs. Use `$output.var_name` in any node title to mark an output:
- Set the node title to: `$output.result`
- The system will use that node's output as the tool's return value
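As a sketch, marking for example a `VAEDecode` node as an output in the exported API JSON might look like this (the node id and input links are illustrative):

```json
{
  "9": {
    "class_type": "VAEDecode",
    "_meta": { "title": "$output.result" },
    "inputs": { "samples": ["7", 0], "vae": ["4", 2] }
  }
}
```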
You can add a node titled `MCP` to the workflow to provide a tool description:
- Add a `String (Multiline)` or similar text node (it must have a single string property, and its field should be one of: value, text, string)
- Set the node title to: `MCP`
- Enter a detailed tool description in the value field
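A sketch of such a description node in exported API JSON. The node id and `class_type` depend on which text node pack you use and are assumptions here; `value` is one of the accepted field names listed above.

```json
{
  "12": {
    "class_type": "String (Multiline)",
    "_meta": { "title": "MCP" },
    "inputs": {
      "value": "Applies Gaussian blur to an input image and returns the blurred image URL."
    }
  }
}
```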
- Parameter Validation: Optional parameters (without `!`) must have default values set in the node
- Node Connections: Fields already connected to other nodes will not be parsed as parameters
- Tool Naming: The exported file name is used as the tool name, so choose a meaningful English name
- Detailed Descriptions: Provide detailed parameter descriptions for a better user experience
- Export Format: Export in API format, not UI format
Scan the QR codes below to join our communities for latest updates and technical support:
| Discord Community | WeChat Group |
| --- | --- |
We welcome all forms of contribution! Whether you're a developer, designer, or user, you can participate in the project in the following ways:
- Submit bug reports on the Issues page
- Please search for similar issues before submitting
- Describe the reproduction steps and environment in detail
- Submit feature requests in Issues
- Describe the feature you want and its use case
- Explain how it improves the user experience
- Fork this repo to your GitHub account
- Create a feature branch: `git checkout -b feature/your-feature-name`
- Develop and add corresponding tests
- Commit changes: `git commit -m "feat: add your feature"`
- Push to your repo: `git push origin feature/your-feature-name`
- Create a Pull Request to the main repo
- Python code follows the PEP 8 style guide
- Add appropriate documentation and comments for new features
- Share your ComfyUI workflows with the community
- Submit tested workflow files
- Add usage instructions and examples for workflows
Sincere thanks to the following organizations, projects, and teams for supporting the development of this project.
This project is released under the MIT License (see LICENSE; SPDX-License-Identifier: MIT).