A secure MCP (Model Context Protocol) server for ComfyUI. Enables AI assistants like Claude to generate images, run workflows, and manage jobs through ComfyUI — with built-in security controls.
ComfyUI's API is powerful but permissive — custom nodes can execute arbitrary code, file paths are accepted without validation, and there's no built-in rate limiting or audit trail. When exposing this API to an AI assistant via MCP, those gaps become security risks.
This server adds five security layers between the AI assistant and ComfyUI:
| Layer | What it does |
|---|---|
| Workflow Inspector | Parses every workflow before execution, extracts node types, flags dangerous patterns (eval, exec, __import__, subprocess). Configurable audit-only or enforcement mode. |
| Path Sanitizer | Validates all filenames, subfolders, and URL path segments — blocks path traversal (../), null bytes, percent-encoded attacks, absolute paths, and disallowed file extensions. |
| Rate Limiter | Token-bucket rate limiting per tool category to prevent runaway loops. |
| Audit Logger | Structured JSON logging of every operation with automatic redaction of sensitive fields (tokens, passwords). |
| Selective API Surface | Only exposes safe ComfyUI endpoints. Dangerous endpoints (/userdata, /free, /users) are never proxied. /system_stats is called internally by get_system_info but only a strict whitelist (GPU VRAM, queue counts, version) is returned. |
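The selective-surface idea can be sketched as a whitelist filter over the raw `/system_stats` response. The field names below are illustrative assumptions, not the server's actual keys:

```python
# Hypothetical sketch of the whitelist filtering that get_system_info applies
# before returning anything to the client. Key names are assumptions.
ALLOWED_KEYS = {"comfyui_version", "vram_total", "vram_free", "queue_remaining"}

def filter_system_stats(raw: dict) -> dict:
    """Keep only whitelisted fields; everything else is dropped."""
    return {k: v for k, v in raw.items() if k in ALLOWED_KEYS}

raw = {
    "comfyui_version": "0.3.10",
    "vram_total": 24576,
    "vram_free": 20480,
    "queue_remaining": 2,
    "os": "Linux",             # dropped — never forwarded
    "python_version": "3.12",  # dropped — never forwarded
}
safe = filter_system_stats(raw)
```

The raw response never crosses the boundary; only the filtered copy does.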
When wait=True is passed to generate_image or run_workflow, the server connects to ComfyUI's WebSocket to track execution in real time — reporting step progress, current node, and output files when complete. If the WebSocket connection fails, it automatically falls back to HTTP polling. Use get_progress to check status of any job at any time.
```bash
# Install
pip install comfyui-mcp  # or: git clone + uv sync

# Add to Claude Code (plugin with slash commands, skills, and security hook)
claude plugin install github:hybridindie/comfyui_mcp

# Or add to any MCP client via uvx — see "Setup" below for
# Claude Desktop, VS Code, Cursor, and more
```

Prerequisites: Python 3.12+, a running ComfyUI instance.
Set COMFYUI_URL if ComfyUI isn't at http://localhost:8188:
```bash
export COMFYUI_URL="http://your-gpu-server:8188"
```

| Tool | Description |
|---|---|
| `generate_image` | Text-to-image using a built-in workflow. Params: prompt, negative_prompt, width, height, steps, cfg, model. Set wait=True to block until complete and return outputs. |
| `transform_image` | Image-to-image transformation. Params: image (filename), prompt, negative_prompt, strength (0.0-1.0), steps, cfg, model. Input must be uploaded via upload_image first. |
| `inpaint_image` | Inpaint masked regions of an image. Params: image, mask (filenames), prompt, negative_prompt, strength, steps, cfg, model. Both files must be uploaded first. |
| `upscale_image` | Upscale an image using a model-based upscaler. Params: image (filename), upscale_model (default: RealESRGAN_x4plus.pth). |
| `run_workflow` | Submit arbitrary ComfyUI workflow JSON. Inspected for dangerous nodes before execution. Set wait=True to block until complete and return outputs. |
| `summarize_workflow` | Summarize a workflow's structure, data flow, models, and parameters. Supports format="text" (default) or format="mermaid" for diagram markup. |
| `create_workflow` | Create a workflow from templates including txt2img/img2img/upscale/inpaint, txt2vid_animatediff/txt2vid_wan, controlnet_canny/controlnet_depth/controlnet_openpose, ip_adapter, lora_stack, face_restore, flux_txt2img, and sdxl_txt2img. |
| `modify_workflow` | Apply batch operations (add_node, remove_node, set_input, connect, disconnect) to a workflow. |
| `validate_workflow` | Validate workflow structure, server compatibility, and security. |
| Tool | Description |
|---|---|
| `get_queue` | Get current execution queue state. |
| `get_job` | Check status of a job by prompt_id. |
| `cancel_job` | Cancel a running or queued job. |
| `interrupt` | Interrupt the currently executing workflow. |
| `get_queue_status` | Get detailed queue status including running and pending prompts. |
| `clear_queue` | Clear pending and/or running items from the queue. |
| `get_progress` | Get execution progress for a workflow by prompt_id. Returns status, queue position, and outputs. |
| Tool | Description |
|---|---|
| `list_models` | List available models by folder (checkpoints, loras, vae, etc.). |
| `list_nodes` | List all available node types. |
| `get_node_info` | Get detailed info about a specific node type. |
| `list_workflows` | List saved workflow templates. |
| `list_extensions` | List available ComfyUI extensions. |
| `get_server_features` | Get ComfyUI server features and capabilities. |
| `list_model_folders` | List available model folder types. |
| `get_model_metadata` | Get metadata for a specific model file. |
| `audit_dangerous_nodes` | Scan all installed nodes to identify potentially dangerous ones. |
| `get_system_info` | Sanitized GPU VRAM, queue depth, and ComfyUI version (whitelist-filtered from /system_stats). |
| Tool | Description |
|---|---|
| `get_history` | Browse execution history (read-only). |
| Tool | Description |
|---|---|
| `search_models` | Search HuggingFace or CivitAI for models. Returns name, download URL, size, and stats. |
| `download_model` | Download a model via ComfyUI-Model-Manager. URL and extension validated. |
| `get_download_tasks` | Check status of active model downloads (progress, speed, status). |
| `cancel_download` | Cancel or clean up a model download task. |
| `get_model_presets` | Return recommended sampler/scheduler/steps/CFG defaults for a model family. |
| `get_prompting_guide` | Return model-family prompt engineering tips and negative prompt guidance. |
Requires: ComfyUI-Model-Manager installed in your ComfyUI instance. Download tools are gated behind lazy detection — if Model Manager is not installed, these tools return a helpful error message. `search_models` works without it.
Model Manager tracks downloads as tasks. After a download completes, the task remains in the list with status: "pause" and progress: 100 — this is upstream Model Manager behavior. Call cancel_download to remove it:
```
download_model(url="...", folder="checkpoints", filename="model.safetensors")
→ { "taskId": "abc123", ... }

get_download_tasks()
→ { "tasks": [{ "taskId": "abc123", "status": "pause", "progress": 100, ... }] }

cancel_download(task_id="abc123")
→ { "success": true, ... }
```
The download_model tool always sends a previewFile field (required by Model Manager even when empty). Omitting it causes the server to silently fail and delete the task.
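A minimal sketch of why the tool always sends that field. The payload keys other than `previewFile` are illustrative assumptions, not Model Manager's verified schema:

```python
# Hypothetical payload builder illustrating the previewFile quirk described
# above. Field names besides previewFile are assumptions for illustration.
def build_download_payload(url: str, folder: str, filename: str,
                           preview_file: str = "") -> dict:
    return {
        "url": url,
        "folder": folder,
        "filename": filename,
        # Always present, even when empty — omitting it makes Model Manager
        # silently fail and delete the task.
        "previewFile": preview_file,
    }

payload = build_download_payload("https://example.com/m.safetensors",
                                 "checkpoints", "model.safetensors")
```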
| Tool | Description |
|---|---|
| `search_custom_nodes` | Search the ComfyUI Manager registry for custom nodes by name, description, or author. Returns ID, name, description, author, and install status. |
| `install_custom_node` | Install a custom node pack from the registry. Set restart=True to restart ComfyUI and run an automatic security audit on all installed nodes. |
| `uninstall_custom_node` | Uninstall a custom node pack. Set restart=True to restart ComfyUI afterward. |
| `update_custom_node` | Update a custom node pack to the latest version. Set restart=True to restart and audit. |
| `get_custom_node_status` | Check the custom node operation queue status (total tasks, completed, in progress, processing state). |
Requires: ComfyUI Manager installed in your ComfyUI instance. Tools are gated behind lazy detection — if ComfyUI Manager is not installed, these tools return a helpful error message.
| Tool | Description |
|---|---|
| `upload_image` | Upload a base64-encoded image to ComfyUI's input directory. Path-sanitized. |
| `get_image` | Download a generated image. Returns base64-encoded data URI. Path-sanitized. |
| `list_outputs` | List generated output filenames from history. |
| `upload_mask` | Upload a mask image to ComfyUI's input directory. Path-sanitized. |
| `get_workflow_from_image` | Extract embedded workflow and prompt metadata from a ComfyUI-generated PNG. |
These ComfyUI endpoints are never proxied due to security risks:

- `/userdata` — arbitrary file read/write
- `/free` — unload models (DoS vector)
- `/users` — user management
- `/history` POST — delete history
/system_stats is called internally only by get_system_info, which applies a strict whitelist and never forwards the raw response.
| Threat | Impact | Mitigation |
|---|---|---|
| Arbitrary code execution via workflow nodes | Critical | Workflow inspector (audit/enforce mode) |
| Path traversal via file operations | High | Path sanitizer blocks .., null bytes, encoded attacks, absolute paths |
| Denial of service via request flooding | Medium | Token-bucket rate limiter per tool category |
| Credential leakage in logs | Medium | Automatic redaction of token, password, secret, api_key, authorization |
| Information disclosure via API | Low | Dangerous endpoints (/userdata, /free) never proxied; /system_stats whitelist-filtered by get_system_info |
| MITM on ComfyUI connection | Medium | Configurable TLS verification |
Workflow Inspector (security/inspector.py)
- Parses workflow JSON, extracts node types, checks against configurable blocklist
- Recursive pattern matching for `__import__()`, `eval()`, `exec()`, `os.system()`, `subprocess` in all input values (including nested dicts/lists)
- Audit mode: logs warnings, allows execution. Enforce mode: blocks unapproved nodes
- Limitation: static blocklist can be bypassed with obfuscation or unknown custom nodes
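The recursive scan over nested input values can be sketched roughly like this (a simplified stand-in, not the actual `security/inspector.py` code):

```python
import re

# Simplified dangerous-pattern scan over workflow input values, walking
# nested dicts/lists as described above. Not the real implementation.
DANGEROUS = re.compile(
    r"__import__\s*\(|eval\s*\(|exec\s*\(|os\.system\s*\(|subprocess"
)

def scan(value) -> list:
    """Collect every dangerous-pattern match found anywhere in `value`."""
    hits = []
    if isinstance(value, str):
        hits += DANGEROUS.findall(value)
    elif isinstance(value, dict):
        for v in value.values():
            hits += scan(v)
    elif isinstance(value, (list, tuple)):
        for v in value:
            hits += scan(v)
    return hits

workflow_inputs = {
    "code": "import os; os.system('ls')",
    "nested": [{"x": "eval(payload)"}],
    "prompt": "a photo of a cat",          # benign, no match
}
hits = scan(workflow_inputs)
```

In audit mode these hits become warnings in the log; in enforce mode they block the workflow.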
Path Sanitizer (security/sanitizer.py)
- Validates filenames, subfolders, and URL path segments: blocks path traversal, null bytes, absolute paths, control characters
- URL path segment validation on discovery tools (`list_models`, `get_model_metadata`) prevents folder/filename injection
- Allowlist-based extension filtering (default: `.png`, `.jpg`, `.jpeg`, `.webp`, `.gif`, `.json`)
- Handles percent-encoded inputs (URL decoding before validation)
- Enforces max upload size (default 50MB), max filename length (255 chars)
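The decode-then-validate ordering matters: a percent-encoded traversal like `%2e%2e%2f` only reveals itself after decoding. A minimal sketch of these checks (illustrative, not the actual `security/sanitizer.py`):

```python
from urllib.parse import unquote

# Simplified filename validation in the spirit of the sanitizer described
# above; the real module also validates subfolders and URL path segments.
ALLOWED_EXTS = {".png", ".jpg", ".jpeg", ".webp", ".gif", ".json"}

def is_safe_filename(name: str) -> bool:
    decoded = unquote(name)                    # URL-decode BEFORE validating
    if "\x00" in decoded or any(ord(c) < 32 for c in decoded):
        return False                           # null bytes / control chars
    if decoded.startswith(("/", "\\")) or ":" in decoded:
        return False                           # absolute or drive-letter paths
    if ".." in decoded:
        return False                           # path traversal
    if len(decoded) > 255:
        return False                           # max filename length
    dot = decoded.rfind(".")
    return dot > 0 and decoded[dot:].lower() in ALLOWED_EXTS

blocked = is_safe_filename("%2e%2e%2fetc%2fpasswd")  # decodes to ../etc/passwd
allowed = is_safe_filename("cat.png")
```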
Rate Limiter (security/rate_limit.py)
- Token-bucket per tool category: workflow (10/min), generation (10/min), file_ops (30/min), read_only (60/min)
- In-memory only (resets on restart, no distributed support)
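A token bucket of this kind is small enough to sketch in full. This is an illustrative implementation of the pattern, not the project's `security/rate_limit.py`:

```python
import time

# Minimal token bucket matching the per-category limits above
# (e.g. workflow: 10/min). Illustrative, in-memory, single-process.
class TokenBucket:
    def __init__(self, rate_per_min: int):
        self.capacity = float(rate_per_min)
        self.tokens = float(rate_per_min)      # start full (allows bursts)
        self.refill_per_sec = rate_per_min / 60.0
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_min=10)
results = [bucket.allow() for _ in range(12)]  # rapid burst of 12 calls
```

The burst drains the bucket after ten calls; further calls are refused until tokens refill.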
HTTP Client (client.py)
- Configurable TLS verification, connect/read timeouts
- Retries on connection errors with backoff (3 retries default). HTTP 4xx/5xx errors raised immediately (no retry)
WebSocket Progress (progress.py)
- On-demand WebSocket connections for real-time execution tracking (step progress, current node, outputs)
- Automatic HTTP polling fallback if WebSocket connection fails
- TLS/SSL passthrough for secure ComfyUI connections
- Per-prompt event filtering (ignores events from other concurrent jobs)
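Per-prompt filtering reduces to checking each event's `prompt_id` against the one being tracked. The event shapes below approximate ComfyUI's `/ws` messages for illustration; they are not a verified schema:

```python
# Sketch of per-prompt event filtering: only events carrying our prompt_id
# are processed; concurrent jobs are ignored. Event shapes are assumptions.
def relevant_events(events: list, prompt_id: str) -> list:
    return [e for e in events
            if e.get("data", {}).get("prompt_id") == prompt_id]

events = [
    {"type": "progress",  "data": {"prompt_id": "abc", "value": 5, "max": 20}},
    {"type": "progress",  "data": {"prompt_id": "zzz", "value": 1, "max": 20}},
    {"type": "executing", "data": {"prompt_id": "abc", "node": "3"}},
]
mine = relevant_events(events, "abc")
```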
Every workflow is inspected and logged, but nothing is blocked. Use this during development to understand what nodes your workflows use.
```yaml
security:
  mode: "audit"
```

Audit log entries look like:
```json
{
  "timestamp": "2026-02-25T14:30:00+00:00",
  "tool": "run_workflow",
  "action": "inspected",
  "nodes_used": ["KSampler", "CLIPTextEncode", "VAEDecode", "SaveImage"],
  "warnings": []
}
```

When a dangerous node is detected, warnings are included in the tool response:
```
Workflow submitted. prompt_id: abc123
⚠️ Warnings detected:
- Dangerous node type: ExecutePython
- Suspicious input in node 5 (ExecutePython), field 'code'
```
The MCP instructions tell the LLM to inform users and ask for confirmation before proceeding when warnings are present.
Use the audit_dangerous_nodes tool to scan your ComfyUI installation for potentially dangerous nodes:
```
audit_dangerous_nodes() → {
  "total_nodes": 456,
  "dangerous": {
    "count": 12,
    "nodes": [
      {"class": "ExecutePython", "reason": "Name matches pattern: \\bexec\\b"},
      {"class": "RunPython", "reason": "Name matches pattern: \\brunpython\\b"},
      {"class": "ShellCommand", "reason": "Name matches pattern: \\bshell\\b"}
    ]
  },
  "suspicious": {...}
}
```
Add these to your config:
```yaml
security:
  mode: "audit"
  dangerous_nodes:
    - "ExecutePython"   # from audit_dangerous_nodes
    - "RunPython"
    - "ShellCommand"
    # ... other nodes found by audit
```

Only explicitly approved nodes can run. Any workflow containing an unapproved node is rejected.
```yaml
security:
  mode: "enforce"
  allowed_nodes:
    - "KSampler"
    - "CheckpointLoaderSimple"
    - "CLIPTextEncode"
    - "VAEDecode"
    - "EmptyLatentImage"
    - "SaveImage"
    - "LoadImage"
    - "LoraLoader"
```

Tip: Use audit_dangerous_nodes to identify dangerous nodes, run workflows in audit mode to see which nodes you use, then switch to enforce mode with that allowlist.
All tool invocations are logged as JSON lines to ~/.comfyui-mcp/audit.log:
```bash
# Watch the audit log in real time
tail -f ~/.comfyui-mcp/audit.log | python -m json.tool

# Find all workflows that used dangerous nodes
grep '"warnings":\[' ~/.comfyui-mcp/audit.log | grep -v '"warnings":\[\]'
```

Sensitive fields (token, password, secret, api_key, authorization) are automatically redacted from log entries.
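Redaction of that kind typically walks the log entry recursively and replaces any value whose key is on the sensitive list. A sketch in that spirit (illustrative, not the project's `audit.py`):

```python
# Simplified recursive redaction of sensitive fields, as described above.
SENSITIVE = {"token", "password", "secret", "api_key", "authorization"}

def redact(entry):
    """Return a copy with sensitive values replaced by a placeholder."""
    if isinstance(entry, dict):
        return {k: ("[REDACTED]" if k.lower() in SENSITIVE else redact(v))
                for k, v in entry.items()}
    if isinstance(entry, list):
        return [redact(v) for v in entry]
    return entry

log = {"tool": "download_model",
       "params": {"url": "https://example.com/m.safetensors",
                  "api_key": "super-secret"}}
safe = redact(log)
```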
For production, run behind a reverse proxy (nginx, Traefik) to add TLS termination, authentication, and CSP headers. No PII is collected. No external telemetry.
```mermaid
flowchart TB
    subgraph Client["LLM Client"]
        MC[Claude / AI Assistant]
    end
    subgraph MCP["ComfyUI MCP Server"]
        CONFIG[Config<br/>YAML/env]
        AL[Audit Logger<br/>JSON logs]
        subgraph Security["Security Layers"]
            WI[Workflow Inspector<br/>Dangerous nodes<br/>Suspicious input]
            PS[Path Sanitizer<br/>Traversal block<br/>Extension filter]
            RL[Rate Limiter<br/>Token-bucket]
        end
        subgraph Tools["Tool Groups"]
            TG[generation.py<br/>jobs.py<br/>discovery.py<br/>history.py<br/>files.py]
        end
        API[ComfyUI Client<br/>httpx]
        WS[WebSocket Progress<br/>websockets]
    end
    subgraph ComfyUI["ComfyUI Server"]
        CS[REST API<br/>port 8188]
        CWS[WebSocket<br/>/ws]
    end
    MC <-- MCP --> MCP
    CONFIG --> MCP
    AL --> MCP
    MCP --> Security
    Security --> Tools
    Tools --> API
    Tools --> WS
    API -- httpx --> CS
    WS -- websockets --> CWS
```
| Component | File | Responsibility |
|---|---|---|
| Server | `server.py` | Entry point, wires components, registers tools |
| Config | `config.py` | Pydantic settings, YAML loading, env overrides |
| Client | `client.py` | Async HTTP client for ComfyUI REST API |
| Progress | `progress.py` | WebSocket progress tracking with HTTP polling fallback |
| Audit | `audit.py` | Structured JSON logging with redaction |
| Workflow Inspector | `security/inspector.py` | Node type detection, dangerous pattern matching |
| Node Auditor | `security/node_auditor.py` | Scans installed nodes for dangerous patterns |
| Path Sanitizer | `security/sanitizer.py` | Path traversal, extension filtering |
| Rate Limiter | `security/rate_limit.py` | Token-bucket per tool category |
| Download Validator | `security/download_validator.py` | URL domain/path and extension validation for downloads |
| Model Checker | `security/model_checker.py` | Proactive missing model detection in workflows |
| Model Manager | `model_manager.py` | Lazy detection of ComfyUI-Model-Manager availability |
Configuration snippets for connecting the MCP server to your AI assistant. Each environment supports uvx (recommended) and Docker variants.
Docker note: All Docker snippets use
-i(required for stdio),--rm(auto-remove on exit), andhost.docker.internalto reach the host machine from inside the container. On Linux, replace-e COMFYUI_URL=http://host.docker.internal:8188with--network host -e COMFYUI_URL=http://localhost:8188. The imageghcr.io/hybridindie/comfyui_mcp:mainis published on every push to main. Pin to a semver tag for stability.
Plugin install (recommended — includes slash commands, skills, and security hook):

```bash
claude plugin install github:hybridindie/comfyui_mcp
```

Or add to .mcp.json in your project:
```json
{
  "mcpServers": {
    "comfyui": {
      "command": "uvx",
      "args": ["comfyui-mcp"],
      "env": { "COMFYUI_URL": "http://localhost:8188" }
    }
  }
}
```

Config file: ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or %APPDATA%\Claude\claude_desktop_config.json (Windows).
uvx:
```json
{
  "mcpServers": {
    "comfyui": {
      "command": "uvx",
      "args": ["comfyui-mcp"],
      "env": { "COMFYUI_URL": "http://localhost:8188" }
    }
  }
}
```

Docker:
```json
{
  "mcpServers": {
    "comfyui": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "COMFYUI_URL=http://host.docker.internal:8188", "ghcr.io/hybridindie/comfyui_mcp:main"],
      "env": {}
    }
  }
}
```

Config: .vscode/mcp.json (workspace) or the MCP: Open User Configuration command.
uvx:
```json
{
  "servers": {
    "comfyui": {
      "type": "stdio",
      "command": "uvx",
      "args": ["comfyui-mcp"],
      "env": { "COMFYUI_URL": "http://localhost:8188" }
    }
  }
}
```

Docker:
```json
{
  "servers": {
    "comfyui": {
      "type": "stdio",
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "COMFYUI_URL=http://host.docker.internal:8188", "ghcr.io/hybridindie/comfyui_mcp:main"],
      "env": {}
    }
  }
}
```

Config: .cursor/mcp.json (project) or ~/.cursor/mcp.json (global).
uvx:
```json
{
  "mcpServers": {
    "comfyui": {
      "command": "uvx",
      "args": ["comfyui-mcp"],
      "env": { "COMFYUI_URL": "http://localhost:8188" }
    }
  }
}
```

Docker:
```json
{
  "mcpServers": {
    "comfyui": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "COMFYUI_URL=http://host.docker.internal:8188", "ghcr.io/hybridindie/comfyui_mcp:main"],
      "env": {}
    }
  }
}
```

Config: ~/.codeium/windsurf/mcp_config.json.
uvx:
```json
{
  "mcpServers": {
    "comfyui": {
      "command": "uvx",
      "args": ["comfyui-mcp"],
      "env": { "COMFYUI_URL": "http://localhost:8188" }
    }
  }
}
```

Docker:
```json
{
  "mcpServers": {
    "comfyui": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "COMFYUI_URL=http://host.docker.internal:8188", "ghcr.io/hybridindie/comfyui_mcp:main"],
      "env": {}
    }
  }
}
```

Config: .continue/config.yaml (project) or global config.
uvx:
```yaml
mcpServers:
  - name: ComfyUI
    command: uvx
    args:
      - comfyui-mcp
    env:
      COMFYUI_URL: "http://localhost:8188"
```

Docker:
```yaml
mcpServers:
  - name: ComfyUI
    command: docker
    args:
      - run
      - "-i"
      - "--rm"
      - "-e"
      - "COMFYUI_URL=http://host.docker.internal:8188"
      - "ghcr.io/hybridindie/comfyui_mcp:main"
```

Config: ~/.config/opencode/opencode.json (global) or opencode.json (project root).
uvx:
```json
{
  "mcp": {
    "comfyui": {
      "type": "local",
      "command": "uvx",
      "args": ["comfyui-mcp"],
      "env": { "COMFYUI_URL": "http://localhost:8188" }
    }
  }
}
```

Docker:
```json
{
  "mcp": {
    "comfyui": {
      "type": "local",
      "command": "docker",
      "args": ["run", "-i", "--rm", "-e", "COMFYUI_URL=http://host.docker.internal:8188", "ghcr.io/hybridindie/comfyui_mcp:main"],
      "env": {}
    }
  }
}
```

Open WebUI supports MCP via Streamable HTTP only — not stdio. Use MCPO to bridge:

```bash
uvx mcpo -- uvx comfyui-mcp
```

In Open WebUI, add the MCPO endpoint (default http://localhost:8000) as an MCP server with type "MCP (Streamable HTTP)".
Alternative: SSE mode (unverified)
Enable SSE transport in ~/.comfyui-mcp/config.yaml:
```yaml
transport:
  sse:
    enabled: true
    host: "0.0.0.0"
    port: 8080
```

Open WebUI may accept this at http://&lt;host&gt;:8080/sse, but SSE and Streamable HTTP are different transports. Test before relying on this path.
Config file: ~/.comfyui-mcp/config.yaml
```yaml
comfyui:
  url: "http://127.0.0.1:8188"   # ComfyUI server URL
  tls_verify: true               # TLS certificate verification
  timeout_connect: 30            # Connection timeout (seconds)
  timeout_read: 300              # Read timeout (seconds)

security:
  mode: "audit"                  # "audit" (log only) or "enforce" (block unapproved)
  allowed_nodes: []              # Enforce mode: only these nodes can run
  dangerous_nodes:               # Always flagged in audit log (showing subset)
    - "Terminal"                 # comfyui-colab: shell via subprocess
    - "interpreter_tool"         # comfyui_LLM_party: exec/eval
    - "KY_Eval_Python"           # ComfyUI-KYNode: exec Python
    - "Image Send HTTP"          # was-node-suite: arbitrary HTTP
    - "Load Text File"           # was-node-suite: reads arbitrary files
    - "Save Text File"           # was-node-suite: writes arbitrary files
    # ... see config.py _DEFAULT_DANGEROUS_NODES for the full list
  max_upload_size_mb: 50
  allowed_extensions:
    - ".png"
    - ".jpg"
    - ".jpeg"
    - ".webp"
    - ".gif"
    - ".json"
  rate_limits:                   # Requests per minute
    workflow: 10
    generation: 10
    file_ops: 30
    read_only: 60

model_search:
  huggingface_token: ""          # Optional; needed for gated/private HF models
  civitai_api_key: ""            # Optional; needed for auth-only CivitAI access
  max_search_results: 10

logging:
  audit_file: "~/.comfyui-mcp/audit.log"

transport:
  sse:
    enabled: false
    host: "127.0.0.1"
    port: 8080
```

Environment variables override config file values:
| Variable | Overrides |
|---|---|
| `COMFYUI_URL` | comfyui.url |
| `COMFYUI_TLS_VERIFY` | comfyui.tls_verify |
| `COMFYUI_TIMEOUT_CONNECT` | comfyui.timeout_connect |
| `COMFYUI_TIMEOUT_READ` | comfyui.timeout_read |
| `COMFYUI_SECURITY_MODE` | security.mode |
| `COMFYUI_AUDIT_FILE` | logging.audit_file |
| `COMFYUI_HUGGINGFACE_TOKEN` | model_search.huggingface_token |
| `COMFYUI_CIVITAI_API_KEY` | model_search.civitai_api_key |
| `COMFYUI_MAX_SEARCH_RESULTS` | model_search.max_search_results |
| `COMFYUI_ALLOWED_DOWNLOAD_DOMAINS` | security.allowed_download_domains |
search_models and download_model work without API keys for many public models. Add keys when you need access to gated/private resources or higher provider limits.
Set them in config:
```yaml
model_search:
  huggingface_token: "hf_xxx"
  civitai_api_key: "xxx"
```

Or via environment variables:

```bash
export COMFYUI_HUGGINGFACE_TOKEN="hf_xxx"
export COMFYUI_CIVITAI_API_KEY="xxx"
```

Security notes:

- Prefer environment variables in production so secrets do not live in files committed to git.
- Audit logs redact sensitive fields (`token`, `api_key`, etc.), but avoid printing secrets in shell history when possible.
A pre-built Docker image is published to the GitHub Container Registry. No need to clone the repo.
```bash
docker pull ghcr.io/hybridindie/comfyui_mcp:main
```

The container runs uv run comfyui-mcp as its entrypoint, communicating over stdin/stdout (stdio). This makes it compatible with Claude Code, Claude Desktop, and any MCP client. Config is read from /root/.comfyui-mcp/config.yaml inside the container — mount your local config directory to provide it, or use environment variables.
```bash
# Using the hosted image
docker run --rm -i \
  -e COMFYUI_URL=http://host.docker.internal:8188 \
  -v ~/.comfyui-mcp:/root/.comfyui-mcp:ro \
  ghcr.io/hybridindie/comfyui_mcp:main

# Or build and run locally
docker build -t comfyui-mcp .
docker run --rm -i \
  -e COMFYUI_URL=http://host.docker.internal:8188 \
  -v ~/.comfyui-mcp:/root/.comfyui-mcp:ro \
  comfyui-mcp
```

Linux users: Add `--add-host=host.docker.internal:host-gateway` if using `host.docker.internal`.
A docker-compose.yml is included for persistent deployments:
```bash
# Start
COMFYUI_URL=http://your-comfyui:8188 docker compose up -d

# View logs
docker compose logs -f comfyui-mcp
```

The compose file mounts ./config.yaml and persists audit logs to a named volume:
```yaml
services:
  comfyui-mcp:
    build: .
    image: comfyui-mcp:latest
    container_name: comfyui-mcp
    environment:
      - COMFYUI_URL=${COMFYUI_URL:-http://comfyui:8188}
      - COMFYUI_SECURITY_MODE=${COMFYUI_SECURITY_MODE:-audit}
    volumes:
      - ./config.yaml:/root/.comfyui-mcp/config.yaml:ro
      - comfyui-mcp-data:/root/.comfyui-mcp/logs
    restart: unless-stopped

volumes:
  comfyui-mcp-data:
```

```
src/comfyui_mcp/
├── server.py                  # MCP server entry point, wires all components
├── config.py                  # Pydantic settings, YAML loading, env overrides
├── client.py                  # Async HTTP client for ComfyUI API
├── progress.py                # WebSocket progress tracking with HTTP polling fallback
├── audit.py                   # Structured JSON audit logger
├── model_manager.py           # Lazy Model Manager detection and validation
├── security/
│   ├── inspector.py           # Workflow node inspection (audit/enforce)
│   ├── node_auditor.py        # Scans installed nodes for dangerous patterns
│   ├── sanitizer.py           # File path validation
│   ├── rate_limit.py          # Token-bucket rate limiter
│   ├── download_validator.py  # URL/extension validation for model downloads
│   └── model_checker.py       # Proactive model availability checking
├── workflow/
│   ├── templates.py           # Built-in workflow templates (txt2img, img2img, upscale, etc.)
│   ├── operations.py          # Workflow graph operations (add/remove nodes, connect, etc.)
│   └── validation.py          # Workflow analysis and validation
└── tools/
    ├── generation.py          # generate_image, run_workflow, summarize_workflow
    ├── workflow.py            # create_workflow, modify_workflow, validate_workflow
    ├── jobs.py                # get_queue, get_job, cancel_job, interrupt, get_progress
    ├── discovery.py           # list_models, list_nodes, audit_dangerous_nodes, etc.
    ├── history.py             # get_history
    ├── files.py               # upload_image, get_image, list_outputs, upload_mask, get_workflow_from_image
    └── models.py              # search_models, download_model, get_download_tasks, cancel_download
```
```bash
uv sync
uv run pytest -v
```

Verify connectivity, Model Manager availability, and download lifecycle against a running ComfyUI server:
```bash
# Full test (connectivity + folder listing + download task lifecycle)
uv run python scripts/smoke_test.py

# Quick connectivity + folder check only
uv run python scripts/smoke_test.py --no-download

# Target a different server
uv run python scripts/smoke_test.py --url http://localhost:8188
```

The download probe uses a tiny (~520 KB) safetensors file from hf-internal-testing/tiny-random-bert. The file is created with a timestamped name and cleaned up automatically on every run.
MIT