A programmable MCP proxy that takes any existing MCP server and a natural language transform prompt, and produces a new MCP server whose tools are reshaped by that prompt.
- Rename tools, reformat outputs, change schemas
- Hide tools you don't need
- Compose multiple upstream tools into new synthetic tools
- Chain instances via Unix pipes into multi-stage transform pipelines
All without modifying the original server.
```bash
# Run directly with npx
npx mcpblox --upstream "your-mcp-server" --prompt "your transform"

# Or install globally
npm install -g mcpblox
```

```bash
# 1. Proxy an MCP server unchanged (transparent pass-through)
mcpblox --upstream "npx @modelcontextprotocol/server-filesystem /tmp" --api-key $ANTHROPIC_API_KEY

# 2. Preview what transforms the LLM would apply (no server started)
mcpblox \
  --upstream "npx @modelcontextprotocol/server-filesystem /tmp" \
  --prompt "Rename read_file to cat and hide write_file" \
  --api-key $ANTHROPIC_API_KEY \
  --dry-run

# 3. Run with transforms applied
mcpblox \
  --upstream "npx @modelcontextprotocol/server-filesystem /tmp" \
  --prompt "Rename read_file to cat and hide write_file" \
  --api-key $ANTHROPIC_API_KEY

# 4. Point any MCP host at http://localhost:8000/mcp
curl http://localhost:8000/health

# 5. Chain transforms via Unix pipes
mcpblox --upstream "npx @modelcontextprotocol/server-filesystem /tmp" --prompt "Hide write_file" \
  | mcpblox --prompt "Rename read_file to cat" \
  | mcpblox --prompt "Format outputs as markdown"
```

```
┌──────────┐      ┌──────────────────────────────────────────┐      ┌───────────┐
│          │      │                 mcpblox                  │      │           │
│   MCP    │◄────►│ ┌────────┐  ┌───────────┐  ┌─────────┐   │◄────►│ Upstream  │
│   Host   │ HTTP │ │Exposed │  │ Transform │  │Upstream │   │stdio/│    MCP    │
│          │      │ │Server  │──│  Engine   │──│Client   │   │ HTTP │  Server   │
└──────────┘      │ └────────┘  └─────┬─────┘  └─────────┘   │      └───────────┘
                  │                   │                       │
                  │             ┌─────▼─────┐                 │
                  │             │    LLM    │                 │
                  │             │ (startup  │                 │
                  │             │  codegen) │                 │
                  │             └───────────┘                 │
                  └──────────────────────────────────────────┘
```
At startup, mcpblox:
- Connects to the upstream MCP server and discovers its tools
- Sends your transform prompt + tool definitions to an LLM
- The LLM produces a transform plan (which tools to modify, hide, pass through, or compose into new synthetic tools)
- For each modified tool, the LLM generates JavaScript transform functions
- Generated code runs in a sandboxed `vm` context (no filesystem/network access)
- Results are cached, so subsequent startups with the same prompt skip the LLM entirely
At runtime, tool calls flow through the transform pipeline: input args are transformed, the upstream tool is called, and the output is transformed before returning to the host. Pass-through tools are proxied directly with no overhead.
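Conceptually, the per-call flow for a transformed tool looks something like this. The function shapes below are hypothetical illustrations of the input-transform, upstream-call, output-transform pipeline; the real code is generated by the LLM at startup and its exact shape is internal to mcpblox.

```javascript
// Hypothetical input transform: map the exposed tool's args to upstream args
// (e.g. for a renamed "cat" tool backed by upstream read_file).
const transformInput = (args) => ({ path: args.path });

// Hypothetical output transform: reshape upstream results before returning them.
const transformOutput = (result) => ({
  content: result.content.map((c) =>
    c.type === "text" ? { ...c, text: "File contents:\n" + c.text } : c
  ),
});

// The proxy composes the two around the upstream call.
async function callExposedTool(args, callUpstream) {
  const upstreamResult = await callUpstream("read_file", transformInput(args));
  return transformOutput(upstreamResult);
}
```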
```
mcpblox [options]

Upstream (required unless stdin is a pipe):
  --upstream <command>       Upstream MCP server as stdio command
                             e.g., "npx @modelcontextprotocol/server-filesystem /tmp"
  --upstream-url <url>       Upstream MCP server as HTTP/SSE URL
  --upstream-token <token>   Bearer token for HTTP upstream (env: MCP_UPSTREAM_TOKEN)

Transform:
  --prompt <text>            Transform prompt (inline)
  --prompt-file <path>       Transform prompt from file

LLM:
  --provider <name>          LLM provider: anthropic | openai (default: anthropic)
  --model <id>               LLM model ID (default: claude-sonnet-4-20250514 / gpt-4o)
  --api-key <key>            LLM API key (env: ANTHROPIC_API_KEY | OPENAI_API_KEY)

Server:
  --port <number>            HTTP server port (default: 8000, or 0 for OS-assigned when piped)

Cache:
  --cache-dir <path>         Cache directory (default: .mcpblox-cache)
  --no-cache                 Disable caching; regenerate on every startup

Other:
  --dry-run                  Show the transform plan as JSON without starting the server
  --verbose                  Verbose logging (generated code, cache keys, tool call details)
```
Without --prompt, mcpblox runs as a transparent proxy — all tools pass through unchanged.
Rename and restructure tools:

```bash
mcpblox \
  --upstream "npx @mcp/server-github" \
  --prompt "Rename search_repositories to find_repos. For list_issues, add a max_results parameter (default 10) that truncates the output."
```

Format outputs:

```bash
mcpblox \
  --upstream "uvx mcp-server-yfinance" \
  --prompt "Format all numeric values in tool outputs with thousand separators and 2 decimal places. Prefix currency values with $."
```

Hide tools you don't need:

```bash
mcpblox \
  --upstream "npx @modelcontextprotocol/server-filesystem /tmp" \
  --prompt "Hide write_file, create_directory, and move_file. Only expose read-only tools."
```

Synthetic tools (compose upstream tools into new ones):

```bash
mcpblox \
  --upstream "uvx yfinance-mcp" \
  --prompt-file period-returns.txt \
  --port 18500
```

The prompt creates a get_period_returns tool that calls yfinance_get_price_history four times (for 1-month, 3-month, 6-month, and 12-month periods), parses the results, and returns calculated returns for a given stock ticker, all orchestrated in a single tool call.
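The contents of period-returns.txt are not shown here; as a purely hypothetical illustration, a prompt file for a synthetic tool like this might read:

```
Create a new tool get_period_returns(ticker) that calls
yfinance_get_price_history four times (1-month, 3-month, 6-month, and
12-month periods), parses the closing prices, and returns the percentage
return for each period.
```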
Connect to an HTTP/SSE upstream instead of stdio:

```bash
# Proxy an already-running MCP server over HTTP
mcpblox --upstream-url http://localhost:3000/mcp --api-key $ANTHROPIC_API_KEY

# With bearer token authentication
mcpblox --upstream-url http://localhost:3000/mcp \
  --upstream-token $MCP_TOKEN \
  --prompt "Hide admin tools" \
  --api-key $ANTHROPIC_API_KEY
```

Load a complex prompt from a file:

```bash
mcpblox \
  --upstream "uvx yfinance-mcp" \
  --prompt-file transforms.txt \
  --api-key $ANTHROPIC_API_KEY
```

Chain instances via Unix pipes:
```bash
# Each instance reads its upstream URL from stdin and writes its own URL to stdout.
# Only the first instance needs --upstream.
mcpblox --upstream "node stock-server.js" --prompt "Add a max_results param to search" \
  | mcpblox --prompt "Format prices as USD with commas" \
  | mcpblox --prompt "Add caching hints to descriptions"

# Or feed an upstream URL via echo:
echo "http://localhost:3000/mcp" \
  | mcpblox --prompt "Hide admin tools" \
  | mcpblox --prompt "Format outputs as markdown"
```

When stdout is a pipe, mcpblox binds to an OS-assigned port and writes its URL (e.g. http://localhost:57403/mcp) to stdout. The next instance reads that URL from stdin. Use --port to override the auto-assigned port.
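The stdin side of this handshake can be sketched as a small helper. This is a hypothetical illustration of the convention described above, not mcpblox's actual source; `resolveUpstreamUrl` is an invented name.

```javascript
// Decide where the upstream is: an explicit --upstream/--upstream-url flag
// wins; otherwise expect a URL piped in on stdin from the previous instance.
function resolveUpstreamUrl(stdinLine, upstreamFlag) {
  if (upstreamFlag) return upstreamFlag;
  const url = (stdinLine || "").trim();
  if (!/^https?:\/\//.test(url)) {
    throw new Error("no upstream URL on stdin and no --upstream given");
  }
  return url;
}
```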
Chain manually with explicit ports:

```bash
# First instance: modify tool schemas
mcpblox --upstream "node stock-server.js" --prompt "Add a max_results param to search" --port 8001 &

# Second instance: format the output of the first
mcpblox --upstream-url http://localhost:8001/mcp --prompt "Format prices as USD with commas" --port 8002
```

| Endpoint | Method | Description |
|---|---|---|
| `/mcp` | POST | MCP protocol (StreamableHTTP) |
| `/health` | GET | Health check; returns `{"status":"ok","tools":<count>}` |
Transforms are cached to disk in .mcpblox-cache/ (configurable with --cache-dir). The cache key is the hash of your transform prompt combined with the hash of the upstream tool schemas. If either changes, the cache auto-invalidates.
Use --no-cache to force regeneration. Use --dry-run to preview the plan without starting the server.
LLM-generated transform code runs in a restricted Node.js vm context with no access to the filesystem, network, process environment, or module system. The sandbox provides only data-manipulation primitives (JSON, Math, String, Array, etc.) with a 5-second execution timeout for input/output transforms and a 30-second timeout for synthetic tool orchestration.
Synthetic tool orchestration code receives a callTool bridge function that restricts calls to only the upstream tools declared in the tool's plan — it cannot call arbitrary tools or access anything outside the sandbox.
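That restriction amounts to a thin allow-list wrapper around the upstream client. The shape below is hypothetical (`makeCallToolBridge` is an invented name); the real bridge API may differ.

```javascript
// Build a callTool function limited to the upstream tools declared in the
// synthetic tool's plan; anything else is rejected before reaching upstream.
function makeCallToolBridge(allowedTools, callUpstream) {
  return async (name, args) => {
    if (!allowedTools.includes(name)) {
      throw new Error(`tool not declared in plan: ${name}`);
    }
    return callUpstream(name, args);
  };
}
```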
Note: Node.js vm is not a full security boundary — it's sufficient for LLM-generated code in a trusted-user context, not for arbitrary untrusted input.