The fastest way to discover and connect Model Context Protocol (MCP) servers.
Quick Start • Ecosystem Architecture • The Stack
A comprehensive registry and management platform for Model Context Protocol (MCP) services. This monorepo contains both the frontend and backend applications for discovering, managing, and interacting with MCP agents and services.
SlashMCP.com is a platform designed to help developers discover, register, and manage Model Context Protocol services. It provides a user-friendly interface for browsing available MCP agents, viewing their details, and managing service registrations.
- Service Registry: Register and manage MCP services with metadata
- Search & Filter: Find services by name, endpoint, or status
- Service Management: Create, update, and delete service entries
- Service Details: View comprehensive information about each service
- Integration Status System: Three-tier status tracking (Active, Pre-Integration, Offline)
- Automated Integration: Bulk registration and integration scripts for MCP servers
- One-Click Installation: Install STDIO servers directly to Cursor or Claude Desktop with a single click
- Chat Interface: Interact with MCP agents through a chat interface (default landing page)
- Voice Transcription: Real-time voice-to-text using OpenAI Whisper API
- Document Analysis: AI-powered analysis of PDFs, images, and text files using Google Gemini Vision
- Screen Capture: Capture and analyze screen content using browser APIs
- Image Generation: Generate images from natural language using Nano Banana MCP (Gemini-powered)
- Image Display: View generated images directly in chat with automatic blob URL conversion
- STDIO Server Support: Full support for STDIO-based MCP servers (like Nano Banana MCP)
- HTTP Server Support: Support for HTTP-based MCP servers with custom headers
- Kafka Orchestrator: Intelligent tool routing that bypasses Gemini for high-signal queries
- Auto Tool Discovery: Automatic tool discovery for STDIO servers on registration
- Real-time Progress: Server-Sent Events (SSE) for live job progress updates
- Multi-Tier Fallback: Robust API fallback strategy for reliable AI generation
- Modern UI: Built with Next.js and Tailwind CSS for a responsive experience
- Places Search: Find coffee shops, restaurants, record stores, and more with the `search_places` tool
- Weather Data: Get real-time weather, temperature, and forecasts with the `lookup_weather` tool
- Route Planning: Calculate directions and travel times with the `compute_routes` tool
- Smart Parameter Formatting: Automatically formats queries for the Google Maps API (`textQuery` for places, a location object for weather)
- Enhanced Map Links: Uses place IDs for accurate business name display in Google Maps
- MCP Access Enabled: Requires `gcloud beta services mcp enable mapstools.googleapis.com` for API access
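The parameter shaping described above can be sketched roughly as follows. This is an illustrative helper, not the actual backend formatter; the function and field names (`formatMapsParams`, `textQuery`, `location.address`) mirror the description but are assumptions.

```typescript
// Hypothetical sketch of the smart parameter formatting: places get a
// free-text `textQuery`, weather gets a structured location object.
type MapsToolParams =
  | { textQuery: string }                // shape assumed for search_places
  | { location: { address: string } };   // shape assumed for lookup_weather

function formatMapsParams(
  tool: "search_places" | "lookup_weather",
  query: string,
): MapsToolParams {
  // Strip common leading filler so the API receives a clean query.
  const cleaned = query.replace(/^(find|show me|what'?s)\s+/i, "").trim();
  if (tool === "search_places") {
    return { textQuery: cleaned };
  }
  // For weather, drop the "the weather in" prefix and keep only the place.
  const address = cleaned.replace(/^(the\s+)?(weather|temperature)\s+(in|at|for)\s+/i, "");
  return { location: { address } };
}
```

For example, "find coffee shops in des moines" becomes `{ textQuery: "coffee shops in des moines" }`, while "what's the weather in Fort Worth" becomes `{ location: { address: "Fort Worth" } }`.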
- Intelligent Tool Routing: Automatically routes high-signal queries (like "when is the next concert", "what's the weather in...") directly to appropriate MCP tools without invoking Gemini
- Gemini Quota Protection: Bypasses Gemini API for deterministic queries, saving quota for complex reasoning tasks
- Fast-Path Matching: Keyword and semantic matching routes queries to tools in <50ms
- Weather Query Routing: Routes temperature/weather queries to the Google Maps `lookup_weather` tool
- Places Query Routing: Routes location-based searches to the Google Maps `search_places` tool
- Asynchronous Processing: Kafka-based event-driven architecture for scalable orchestration
- SSE Support: Handles Server-Sent Events (SSE) responses from MCP servers like Exa
- Shared Result Consumer: Always-ready consumer eliminates timeout issues
- Status Endpoint: Check orchestrator health via `/api/orchestrator/status`
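To make the fast-path idea concrete, here is a minimal sketch of the keyword tier of such a matcher. The real MCP Matcher also does semantic matching; the patterns and the function name `fastPathMatch` here are illustrative assumptions, not the production code.

```typescript
// Illustrative keyword fast-path: high-signal queries map straight to a
// tool name; everything else returns null and falls back to the LLM.
const FAST_PATHS: Array<{ pattern: RegExp; tool: string }> = [
  { pattern: /\b(weather|temperature|forecast|humidity)\b/i, tool: "lookup_weather" },
  { pattern: /\b(coffee shops?|restaurants?|record stores?)\b/i, tool: "search_places" },
  { pattern: /\b(directions|route|travel time)\b/i, tool: "compute_routes" },
];

function fastPathMatch(query: string): string | null {
  for (const { pattern, tool } of FAST_PATHS) {
    if (pattern.test(query)) return tool;
  }
  return null; // low-signal: let Gemini plan instead
}
```

A plain regex scan like this runs in microseconds, which is how a fast path can stay comfortably under the 50ms budget mentioned above.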
How it works:
- User query enters via `/api/orchestrator/query`
- Ingress Gateway normalizes and publishes to the `user-requests` topic
- MCP Matcher performs fast keyword/semantic matching
- Execution Coordinator invokes the matched tool
- Results are published to `orchestrator-results` and returned to the client
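The first hop of that flow, ingress normalization, might look like the sketch below. The `UserRequestEvent` shape and field names are assumptions for illustration; the backend's actual envelope may differ.

```typescript
// Hypothetical envelope the Ingress Gateway publishes to `user-requests`.
interface UserRequestEvent {
  requestId: string;
  query: string;      // normalized: trimmed and lowercased
  receivedAt: string; // ISO-8601 timestamp
}

function normalizeRequest(rawQuery: string, requestId: string, now: Date): UserRequestEvent {
  return {
    requestId,
    query: rawQuery.trim().toLowerCase(),
    receivedAt: now.toISOString(),
  };
}
```

Normalizing before publishing means the MCP Matcher can pattern-match against a predictable form, regardless of how the user typed the query.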
Setup:
```bash
# Start Kafka (Docker)
docker-compose -f docker-compose.kafka.yml up -d

# Create topics
.\scripts\setup-kafka-topics.ps1

# Enable in backend .env
ENABLE_KAFKA=true
KAFKA_BROKERS=localhost:9092
```

See Kafka Setup Guide and Orchestrator Architecture for details.
- Cursor Deep-Link Support: Install STDIO servers directly to Cursor with automatic deep-link navigation
- Claude Desktop Clipboard: One-click copy-to-clipboard for Claude Desktop configuration
- Smart Client Detection: Automatically disables one-click options for HTTP servers (which require manual setup)
- Permissions Preview: View server capabilities and permissions before installation
- Streamlined UX: Bypasses generic dialog for supported clients, providing instant installation
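A deep-link install could be built along these lines. This is a hedged sketch: Cursor's published deep-link scheme base64-encodes the server config into a `config` query parameter, but verify the exact format against Cursor's current documentation before relying on it.

```typescript
// Sketch (assumed format): build a Cursor MCP-install deep link from a
// STDIO server's command/args config.
function cursorDeepLink(name: string, config: { command: string; args: string[] }): string {
  const encoded = btoa(JSON.stringify(config)); // base64-encode the config JSON
  return (
    "cursor://anysphere.cursor-deeplink/mcp/install" +
    `?name=${encodeURIComponent(name)}&config=${encodeURIComponent(encoded)}`
  );
}
```

Navigating to the returned URL from the browser hands the config to Cursor, which is why no manual JSON editing is needed for STDIO servers.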
- Gemini-Powered Image Generation: Full integration with Nano Banana MCP for AI image generation
- Synchronous & Asynchronous Support: Handles both immediate results and async job polling
- Base64 to Blob Conversion: Automatic conversion of large base64 images to blob URLs (fixes 414 errors)
- Image Display in Chat: Generated images display directly in the chat interface
- API Key Management: Easy API key configuration via UI or API
- Quota Error Handling: User-friendly error messages for API quota issues
- New: Heuristic guard prevents accidental image generation unless explicitly requested or overridden (backend route `POST /api/mcp/tools/generate`).
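The base64-to-blob conversion mentioned above is a standard browser technique; a minimal sketch is shown below. The function name is illustrative, but the approach (decode the payload into a `Blob` and reference it via a short object URL rather than a huge `data:` URI) is what avoids 414 "URI Too Long" errors.

```typescript
// Decode a base64 image payload into a Blob so the UI can reference it by
// a short object URL instead of embedding megabytes into a data: URI.
function base64ToBlob(base64: string, mimeType: string): Blob {
  const bytes = atob(base64); // base64 -> binary string
  const buf = new Uint8Array(bytes.length);
  for (let i = 0; i < bytes.length; i++) buf[i] = bytes.charCodeAt(i);
  return new Blob([buf], { type: mimeType });
}

// In the chat UI (browser only):
//   const url = URL.createObjectURL(base64ToBlob(payload, "image/png"));
//   ...render <img src={url} />, then URL.revokeObjectURL(url) on unmount.
```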
- Three-Tier Status System: Active, Pre-Integration, and Offline status tracking
- Automated Integration: Bulk registration scripts for top 20 MCP servers
- Package Verification: Automatic npm package verification for STDIO servers
- Health Checks: HTTP endpoint health monitoring for HTTP servers
- Integration Status Service: Intelligent status determination based on tools, packages, and health
- Integration Scripts:
  - `register-top-20`: Bulk register servers from manifest
  - `integrate-top-20`: Verify packages and discover tools
  - `verify-integration`: Check all server integration status
  - `test-mcp-servers`: Test active servers with tool invocation
See MCP Integration Guide for details.
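One plausible reading of the three-tier decision is sketched below. The precedence (discovered tools plus a reachable package/endpoint means Active; reachable but no tools yet means Pre-Integration; otherwise Offline) is an assumption inferred from the feature list, not the actual Integration Status Service logic.

```typescript
// Hedged sketch of the three-tier status decision.
type IntegrationStatus = "Active" | "Pre-Integration" | "Offline";

function determineStatus(signals: {
  hasDiscoveredTools: boolean; // tools/list succeeded at least once
  packageVerified: boolean;    // npm package exists (STDIO servers)
  healthCheckPassed: boolean;  // HTTP endpoint reachable (HTTP servers)
}): IntegrationStatus {
  const reachable = signals.packageVerified || signals.healthCheckPassed;
  if (signals.hasDiscoveredTools && reachable) return "Active";
  if (reachable) return "Pre-Integration"; // verified, tools not yet discovered
  return "Offline";
}
```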
- Full STDIO Protocol: Complete support for STDIO-based MCP servers using JSON-RPC
- Automatic Tool Discovery: Tools are automatically discovered when STDIO servers are registered
- On-Demand Discovery: Tool discovery happens on-demand if tools aren't pre-registered
- Background Discovery: Tool discovery continues in background if initial discovery times out
- Environment Variable Passing: Secure environment variable injection for STDIO processes
- State Machine: Robust state management for STDIO communication (INITIALIZING → INITIALIZED → CALLING → COMPLETE)
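The state machine above can be expressed as a small transition table; this sketch enforces only the linear path named in the list and treats anything else as a protocol error (the real implementation likely has additional error/retry states).

```typescript
// Minimal transition table for the STDIO lifecycle named above.
type StdioState = "INITIALIZING" | "INITIALIZED" | "CALLING" | "COMPLETE";

const TRANSITIONS: Record<StdioState, StdioState[]> = {
  INITIALIZING: ["INITIALIZED"],
  INITIALIZED: ["CALLING"],
  CALLING: ["COMPLETE"],
  COMPLETE: [], // terminal
};

function transition(current: StdioState, next: StdioState): StdioState {
  if (!TRANSITIONS[current].includes(next)) {
    throw new Error(`Invalid STDIO transition: ${current} -> ${next}`);
  }
  return next;
}
```

Rejecting out-of-order transitions is what keeps a crashed or misbehaving STDIO process from leaving the backend in an ambiguous state.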
- Dynamic Identity Provider: Registry now supports the `/.well-known/mcp-server-identity` standard
- Automatic Verification: When a server is published, the registry automatically pings the identity endpoint to verify ownership
- Cryptographic Signatures: Verifies signed metadata from server identity endpoints
- Identity Status: Tracks verification status and metadata for each registered server
- Durable Request Tracking: Monitor long-running async operations across MCP servers
- Task Dashboard: New `/tasks` route provides real-time monitoring of all durable tasks
- Status Monitoring: Track task progress, completion status, and errors
- Auto-refresh: Real-time updates with configurable auto-refresh capability
- Task Filtering: Filter tasks by server, status, or type
- Security Scores Overview: View trust scores for all registered servers
- Security Scanning: Background worker that analyzes registered servers for security issues
- npm Audit Integration: Scans dependencies for known vulnerabilities (infrastructure ready)
- LLM-based Code Analysis: AI-powered code scanning for security best practices (infrastructure ready)
- Security Scores: 0-100 scoring system for each server
- Scan Results: Detailed security analysis results stored and accessible via API
- Periodic Scanning: Automated security scans for all active servers
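As an illustration of a 0-100 scoring scheme like the one described, the sketch below subtracts weighted penalties per vulnerability severity from a perfect score. The weights are invented for illustration; the actual scanner's formula is not documented here.

```typescript
// Illustrative security score: 100 minus severity-weighted penalties,
// clamped at 0. Weights are assumptions, not the production values.
interface AuditSummary { critical: number; high: number; moderate: number; low: number }

function securityScore(audit: AuditSummary): number {
  const penalty =
    audit.critical * 25 + // a single critical finding costs a quarter of the score
    audit.high * 10 +
    audit.moderate * 4 +
    audit.low * 1;
  return Math.max(0, 100 - penalty);
}
```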
- `GET /api/tasks` - List all durable tasks
- `GET /api/tasks/:id` - Get specific task details
- `GET /api/tasks/server/:serverId` - Get tasks for a server
- `POST /api/tasks` - Create a new durable task
- `PATCH /api/tasks/:taskId/progress` - Update task progress
- `POST /api/security/scan/:serverId` - Trigger security scan
- `GET /api/security/scores` - Get all security scores
- `GET /api/security/score/:serverId` - Get security score for a server
- `POST /api/orchestrator/query` - Submit a query to the intelligent orchestrator
- `GET /api/orchestrator/status` - Check orchestrator health and service status
```
mcp-registry/
├── app/                  # Frontend Next.js application
│   ├── chat/             # Chat interface pages
│   ├── settings/         # Settings pages
│   └── page.tsx          # Main registry page
├── components/           # React components
│   ├── ui/               # Reusable UI components
│   └── ...               # Feature components
├── backend/              # Backend Express API
│   ├── src/              # Backend source code
│   │   └── server.ts     # Express server
│   ├── prisma/           # Prisma schema and migrations
│   │   ├── schema.prisma
│   │   └── migrations/
│   └── package.json      # Backend dependencies
├── types/                # TypeScript type definitions
├── lib/                  # Utility functions and helpers
├── public/               # Static assets
└── README.md             # This file
```
Via Registry UI (Recommended)
- Start the backend and frontend (see Development Setup below)
- Navigate to the Registry UI at `http://localhost:3000/registry`
- Find "Nano Banana MCP" or add it:
  - Server Type: STDIO Server
  - Command: `npx`
  - Arguments: `["-y", "nano-banana-mcp"]`
  - Credentials: Your Gemini API key (starts with `AIza...`)
- Get a Gemini API key from Google AI Studio
- Save and start chatting!
Try it in Chat:
- Go to the Chat page
- Select "Nano Banana MCP" from the agent dropdown
- Type: "make me a picture of a kitty"
- The image will appear directly in the chat!
Via Registry UI:
- Navigate to the Registry UI at `http://localhost:3000/registry`
- Find "Google Maps MCP (Grounding Lite)" and click Edit
- In HTTP Headers (JSON), set: `{"X-Goog-Api-Key":"YOUR_API_KEY_HERE"}`
- Enable the Maps Grounding Lite API in your Google Cloud Console
- Save and start chatting!
For Cline or Claude Desktop:

```json
"google-maps": {
  "command": "npx",
  "args": ["-y", "@googlemaps/code-assist-mcp@latest"],
  "env": {
    "GOOGLE_MAPS_API_KEY": "YOUR_API_KEY_HERE"
  }
}
```

The orchestrator intelligently routes queries to the right tools without using Gemini quota:
- Start Kafka:

  ```bash
  docker-compose -f docker-compose.kafka.yml up -d
  .\scripts\setup-kafka-topics.ps1
  ```

- Enable in backend `.env`:

  ```
  ENABLE_KAFKA=true
  KAFKA_BROKERS=localhost:9092
  ```

- Restart the backend - you'll see:

  ```
  [Server] ✅ MCP Matcher started successfully
  [Server] ✅ Execution Coordinator started successfully
  [Server] ✅ Result Consumer started successfully
  ```

- Test it:
  - Go to the Chat page
  - Select "Auto-Route (Recommended)"
  - Ask: "when is the next iration concert in texas"
  - It should route directly to Exa without Gemini!
Check status:

```bash
curl http://localhost:3001/api/orchestrator/status
```

To browse the web and interact with maps via a real browser:

```bash
npx @mcpmessenger/playwright-mcp --install
```

Prereqs
- Node.js β₯ 18
- PostgreSQL (or SQLite for dev)
- npm for backend; pnpm (or npm) for frontend
Backend (API at http://localhost:3001)

```bash
cd backend
npm install
cp env.example.txt .env   # edit secrets/DB
npm run migrate
npm run seed              # seeds all stock servers (Playwright, LangChain, Google Maps MCP, Valuation)
npm start
```

Frontend (http://localhost:3000)

```bash
# From project root:
pnpm install
NEXT_PUBLIC_API_URL=http://localhost:3001 pnpm dev
```

Note: The `.env.local` file is automatically created with `NEXT_PUBLIC_API_URL=http://localhost:3001` for local development.
Common env (backend)

```
DATABASE_URL=postgresql://mcp_registry:your_secure_password@localhost:5432/mcp_registry
PORT=3001
CORS_ORIGIN=http://localhost:3000
GOOGLE_GEMINI_API_KEY=
OPENAI_API_KEY=

# Kafka Orchestrator (optional but recommended)
ENABLE_KAFKA=true
KAFKA_BROKERS=localhost:9092
KAFKA_CLIENT_ID=mcp-orchestrator-coordinator
KAFKA_GROUP_ID=mcp-orchestrator-coordinator
```

The registry orchestrates between your local machine and Google's cloud-scale tools.
```mermaid
graph LR
    User([User]) --> Agent[Cline / Claude]
    Agent --> Registry{Registry}
    subgraph "Managed (Official)"
        Registry --> GMaps[Google Maps Grounding]
        Registry --> BQ[BigQuery MCP]
    end
    subgraph "Local / Hybrid"
        Registry --> PW[Playwright-MCP]
        Registry --> LC[LangchainMCP]
    end
    GMaps --- Tool1[Search & Routes]
    PW --- Tool2[Browser Automation]
```
The registry now supports both managed and automated map tools:
- Google Grounding Lite: Access fresh, official geospatial data, real-time weather, and distance matrices. No hallucinationsβjust raw Google Maps data.
- Playwright-MCP Bridge: Need to actually see the map or scrape specific business details? Use the Playwright bridge to automate the Google Maps UI.
✅ Weather Query Routing Fixed
- Weather queries (e.g., "what's the temperature in Fort Worth Texas") now automatically route to the Google Maps `lookup_weather` tool
- Smart parameter extraction handles both natural-language and all-caps queries
- Returns comprehensive weather data: temperature, feels-like, humidity, wind, precipitation probability, and more
- Manual selection: When manually selecting the Google Maps agent, the system automatically detects weather queries and uses the correct tool

✅ Places Search Enhanced
- Place searches (e.g., "find coffee shops in des moines") route to the `search_places` tool
- Improved map links using place IDs for accurate business name display
- Better response formatting with coordinates and direct Google Maps links
Setup Requirements:
- Add `GOOGLE_MAPS_API_KEY` to `backend/.env`
- Enable "Maps Grounding Lite API" in Google Cloud Console
- Run `gcloud beta services mcp enable mapstools.googleapis.com --project=YOUR_PROJECT_ID`
- Update API key restrictions to allow "Maps Grounding Lite API"
- Restart the backend after configuration changes
```mermaid
graph TD
    A[Chat UI] -->|/v0.1/servers| B[Registry Backend]
    A -->|/v0.1/invoke| B
    B -->|routes to| C["MCP Servers (HTTP/STDIO)"]
    C -->|tools/call result| B --> A
```
Notes:
- HTTP MCPs can require headers (e.g., Google Maps MCP needs `X-Goog-Api-Key` in the agent's HTTP Headers).
- STDIO MCPs (Playwright, LangChain) are spawned by the backend.
- Registry seeds include Playwright, LangChain, and Google Maps MCP; add your own via the Registry UI or publish API.
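The `/v0.1/invoke` proxy call in the diagram might carry a body like the one sketched here. The field names (`serverId`, `tool`, `arguments`, `httpHeaders`) are assumptions chosen to match the surrounding text, not a confirmed schema.

```typescript
// Hypothetical request body for POST /v0.1/invoke: the backend proxies the
// call to the target MCP server, attaching any per-server HTTP headers.
interface InvokeRequest {
  serverId: string;
  tool: string;
  arguments: Record<string, unknown>;
  httpHeaders?: Record<string, string>; // e.g. X-Goog-Api-Key for HTTP MCPs
}

function buildInvokeRequest(
  serverId: string,
  tool: string,
  args: Record<string, unknown>,
  headers?: Record<string, string>,
): InvokeRequest {
  const req: InvokeRequest = { serverId, tool, arguments: args };
  // Only attach headers when the server actually needs them (HTTP MCPs).
  if (headers && Object.keys(headers).length > 0) req.httpHeaders = headers;
  return req;
}
```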
The backend provides the following key endpoints:
- Registry API (MCP v0.1 specification):
  - `GET /v0.1/servers` - List all registered MCP servers (supports `?search=` and `?capability=` query parameters)
  - `GET /v0.1/servers/:serverId` - Get a specific server by ID
  - `POST /v0.1/publish` - Register a new MCP server
  - `PUT /v0.1/servers/:serverId` - Update an existing server
  - `DELETE /v0.1/servers/:serverId` - Delete a server
  - `POST /v0.1/invoke` - Invoke an MCP tool via backend proxy
- Audio Transcription:
  - `POST /api/audio/transcribe` - Transcribe audio files using Whisper
- Document Analysis:
  - `POST /api/documents/analyze` - Analyze documents (PDFs, images, text) using Gemini Vision
- Streaming & WebSocket:
  - `GET /api/streams/jobs/:jobId` - Get job status via SSE
  - `ws://localhost:3001/ws` - WebSocket for real-time updates
- Kafka: A local Kafka broker powers both the async design pipeline and the intelligent orchestrator. Start it with `docker compose -f docker-compose.kafka.yml up -d`, which spins up Zookeeper and Kafka. Shut it down with `docker compose -f docker-compose.kafka.yml down`.
- Orchestrator Topics:
  - `user-requests`: Normalized user queries from the Ingress Gateway
  - `tool-signals`: High-confidence tool matches from the MCP Matcher
  - `orchestrator-plans`: Gemini fallback plans (when the matcher can't resolve)
  - `orchestrator-results`: Final tool execution results
- Design Pipeline Topics: `design-requests` receives `DESIGN_REQUEST_RECEIVED` events; `design-ready` carries `DESIGN_READY`/`DESIGN_FAILED` results.
- Orchestrator Flow:
  - Frontend calls `POST /api/orchestrator/query` with the user query
  - Ingress Gateway normalizes the query and publishes to `user-requests`
  - MCP Matcher performs fast keyword/semantic matching (<50ms)
  - Execution Coordinator invokes the matched tool and publishes the result
  - Query route receives the result via the shared result consumer
  - Response is returned to the frontend
- Backend Flow: `POST /api/mcp/tools/generate` queues a request (publishes `DESIGN_REQUEST_RECEIVED`), a multimodal worker consumes it, and the backend pushes progress/completions through its WebSocket (`ws://localhost:3001/ws`).
- WebSocket Testing: You can watch jobs with `npx wscat -c ws://localhost:3001/ws` and send `{ "type": "subscribe", "jobId": "<id>" }` to receive `job_status` updates and the resulting SVG payload.
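In code, the subscribe/update exchange from the wscat example might be handled like this. The subscribe message shape matches the text above; the `JobStatusMessage` fields and the terminal-status check are assumptions based on the design-pipeline event names.

```typescript
// Message shapes for the job WebSocket (fields beyond `type`/`jobId` are assumed).
interface JobStatusMessage {
  type: "job_status";
  jobId: string;
  status: string; // e.g. DESIGN_READY or DESIGN_FAILED per the topics above
  payload?: unknown;
}

// Serialize the subscribe frame shown in the wscat example.
function subscribeMessage(jobId: string): string {
  return JSON.stringify({ type: "subscribe", jobId });
}

// A job is done once the design pipeline reports success or failure.
function isTerminal(msg: JobStatusMessage): boolean {
  return msg.status === "DESIGN_READY" || msg.status === "DESIGN_FAILED";
}
```

A client would send `subscribeMessage(jobId)` after opening the socket, then close it once `isTerminal` returns true for an incoming `job_status` frame.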
By wiring Kafka to the Prisma-backed backend we now preserve a responsive frontend while heavy LLM work happens asynchronously in the background, and intelligent routing bypasses Gemini for high-signal queries.
The backend now ships with PostgreSQL (recommended for production) plus a Prisma `Memory` model that persists conversation history, tool invocations, and memories. The `memory.service.ts` exposes helpers such as `searchHistory`, `storeMemory`, and `getMemories`, while the invoke endpoint exposes a `search_history` tool so agents can look up relevant context before responding. Keep your Postgres container running (or point `DATABASE_URL` at a managed instance) to retain agent state between restarts.
Prisma automatically maintains migration history in `backend/prisma/migrations`; rerun `npx prisma migrate dev` whenever you change the schema.
Comprehensive documentation is available in the docs/ directory:
- Development Guide - Complete setup and development workflow
- API Documentation - Complete API reference
- Deployment Guide - Production deployment instructions
- Event-Driven Architecture - Kafka and event processing
- Kafka Setup Guide - How to set up Kafka locally
- Kafka Orchestrator - Orchestrator architecture and implementation
- Testing Guide - One-click installation feature testing
- Strategic Roadmap - Long-term vision and strategy
Both frontend and backend can be developed independently:
- Frontend: Runs on port 3000 (default Next.js port)
- Backend: Runs on port 3001 (configurable via environment variables)
See the Development Guide for detailed instructions.
For production deployment, see the Deployment Guide.
Quick Summary:
- Frontend: Deploy to Vercel (auto-deploys on push to main)
- Backend: Deploy to GCP Cloud Run using Artifact Registry
- Database: Cloud SQL (PostgreSQL) or SQLite for development
- Event Bus: Confluent Cloud (Kafka) or Cloud Pub/Sub (optional)
Backend Deployment (Cloud Run):

```bash
cd backend
cp Dockerfile.debian Dockerfile
gcloud builds submit --tag us-central1-docker.pkg.dev/PROJECT_ID/mcp-registry/mcp-registry-backend --region us-central1 .
gcloud run deploy mcp-registry-backend \
  --image us-central1-docker.pkg.dev/PROJECT_ID/mcp-registry/mcp-registry-backend \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
rm Dockerfile
```

Frontend Deployment (Vercel):
- Auto-deploys on push to main branch
- Or manually: `vercel --prod`
- mcp-registry: Central discovery for all official and community servers.
- playwright-mcp: Full browser capabilities (Chromium) for your agent.
- LangchainMCP: Bridging MCP tools into production LangGraph/LangChain workflows.
Frontend
- Next.js 16 - React framework
- React 19 - UI library
- TypeScript - Type safety
- Tailwind CSS - Styling
- Radix UI - Accessible component primitives
- shadcn/ui - UI component library
Backend
- Express.js 5 - Web framework
- Prisma - ORM and database toolkit
- TypeScript - Type safety
- PostgreSQL - Database (production), SQLite (development)
- Google Gemini API - AI-powered SVG generation and document analysis
- Google Vision API - Image analysis capabilities
- OpenAI Whisper API - Voice-to-text transcription
- Apache Kafka - Event-driven architecture for async processing and intelligent orchestration
- Server-Sent Events (SSE) - Real-time progress streaming
- WebSocket - Bidirectional communication
- ts-node - TypeScript execution
- Multer - File upload handling
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the ISC License - see the LICENSE file for details.
Here's how to use the image generation feature:
- Register Nano Banana MCP (if not already registered):
  - Server Type: STDIO Server
  - Command: `npx`
  - Arguments: `["-y", "nano-banana-mcp"]`
  - Credentials: Your Gemini API key
- Generate an Image:
  - Go to the Chat page
  - Select "Nano Banana MCP" or use "Auto-Route"
  - Type: "make me a picture of a kitty"
  - Wait a few seconds...
  - The image appears in chat! 🎉
- Features:
  - Works with both HTTP and STDIO MCP servers
  - Automatic tool discovery
  - Handles large images (converts base64 to blob URLs)
  - Shows quota errors with helpful messages
  - Supports both synchronous and asynchronous generation
- Check browser console for errors
- Verify API key is set correctly in registry
- Ensure billing is enabled for Gemini API (free tier has limited quotas)
- Check network tab for 414 errors (should be fixed with blob URL conversion)
- Get a new key from Google AI Studio
- Keys must start with `AIza...`
- Update via the Registry UI or API
- See docs/HOW_TO_GET_GEMINI_API_KEY.md for details
- HTTP servers: One-click installation only works for STDIO servers (with commands). HTTP servers require manual configuration via the generic install dialog.
- Missing command: Ensure the server has a `command` field configured (e.g., `npx`, `node`, `python`)
- Cursor not opening: Verify Cursor is installed and the browser allows deep-link navigation
- Clipboard not working: Ensure you're testing on `http://localhost` (the clipboard API requires a secure context)
- Verify command and arguments are correct
- Check environment variables are set
- Look at backend logs for initialization errors
- Ensure `npx` is available in the container
- Check Kafka is running: `docker ps` should show `zookeeper` and `kafka` containers
- Verify topics exist: Run `.\scripts\setup-kafka-topics.ps1` if topics are missing
- Check backend logs: Look for `[Server] ✅ MCP Matcher started successfully` and `[Server] ✅ Execution Coordinator started successfully`
- Verify environment variables: Ensure `ENABLE_KAFKA=true` and `KAFKA_BROKERS=localhost:9092` in `backend/.env`
- Check status endpoint: `curl http://localhost:3001/api/orchestrator/status` should show all services as `true`
- Timeout issues: If queries time out, check that the Result Consumer is running (you should see `[Server] ✅ Result Consumer started successfully`)
- SSE parsing errors: Make sure the backend has been restarted after the SSE parsing fix was applied
- Weather queries not routing: After updating matcher patterns, restart the backend to pick up changes. Weather queries should then route to the Google Maps `lookup_weather` tool automatically
For issues, questions, or contributions, please open an issue on the GitHub repository.
Built with ❤️ for the MCP community