MCP Server Registry #159
-
Thanks for writing this up! I'm increasingly feeling like this is the right path forward too. Before, I had been reluctant for us to undertake this because it felt like building npm/pypi from scratch—but actually, if we assume the continued use of package managers' own registries, we can make this strictly a metadata service about servers. That dials the complexity and security risk way down. We could also integrate something like socket.dev to provide some basic level of security assessment about the underlying packages. Does it sound right that we should require packages to be published to some underlying registry first (npm, pypi, Docker), and then they should be explicitly submitted/registered with the metadata registry afterward?
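If that flow sounds right, a registry entry could carry only metadata plus a pointer into the underlying package registry. A minimal sketch, with invented field names (nothing here is a committed schema):

```json
{
  "name": "example-weather",
  "description": "Weather lookups exposed as MCP tools",
  "repository": "https://github.com/example/weather-mcp",
  "packages": [
    { "registry": "npm", "identifier": "@example/weather-mcp", "version": "1.2.0" }
  ],
  "security": { "provider": "socket.dev", "status": "no-known-issues" }
}
```

The registry would never host artifacts itself; clients would resolve the `packages` pointer against npm/PyPI/Docker Hub directly.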
-
I agree with the opportunity here, namely to create a "single source of truth" for all the MCP servers out there and deduplicate all the effort being made to identify and collate what's been built. Definitely agree on having a global, public API for providing access to this bedrock data. Also agree that reimplementing npm/pypi is out of scope; leave the source code hosting to those solutions. I also like the idea of a base "Server Browser" implementation, while allowing the market to potentially improve on the "server discovery UX" by implementing their own take on top of the global, public API.

I'm not as sold on "curation and segmentation", "security", or "unified runtime" as things that should definitely be solved within the registry. I think these could potentially be separated out as concerns tackled by third-party "Server Browsers", of which the native official "Server Browser" is just a simple implementation that does not offer much in the way of opinionated tags or security guarantees. But maybe we could take these on a case-by-case basis after an initial official registry exists.

I think this should be true for open source packages that are meant to run on a local machine. But I think this "centralized registry" solution should also accept closed source, SSE server submissions.
-
My recommendation would be a standard JSON/YAML specification and/or configuration file that is implementation agnostic (think Terraform). That standardized spec/config would be converted by MCP server implementations into a functional service. Industry-originated specs/configs could then be published for deployment on any MCP server package. It's really inhibiting for each server developer to maintain multiple versions that are not readily extensible, and for developers and service deployers to lack this sort of modular approach. This approach calls for the following:

In addition to the value-add everyone mentions above, this approach could streamline adoption and steer us away from a proprietary/snowflake-solution trajectory (which is where this seems headed). A sketch of what such a spec might look like follows below.
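A minimal sketch of such an implementation-agnostic spec, assuming an invented field layout (none of these keys are part of any existing standard):

```yaml
# Hypothetical, implementation-agnostic MCP server spec; all field names are illustrative.
name: example-weather-server
version: 1.0.0
description: Exposes current weather lookups as MCP tools
transport:
  type: stdio          # or "sse" for remote deployments
runtime:
  language: python
  entrypoint: python -m weather_server
tools:
  - name: get_forecast
    description: Fetch a forecast for a city
    input_schema:
      type: object
      properties:
        city: { type: string }
      required: [city]
```

Any conforming MCP server implementation could then consume this file and stand up a functional service, the way Terraform providers consume a shared config language.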
-
+1 to have an official place where people can submit their servers, clients etc.
-
All good responses here. Thanks for the thoughtful commentary and ideas @tadasant! I'm pretty aligned and look forward to figuring out a more specific plan soon. I'll be on vacation a few days beginning tomorrow, but back full time from next Wednesday.
-
Maintainer of Smithery AI here! I would definitely want to be kept up to date about MCP's official plans, as Smithery currently offers both hosting and registry of MCPs. Open to discussing any potential integrations/collaborations that might deduplicate work. Our registry API: https://smithery.ai/docs/registry
-
I believe the issue of registry fragmentation, with different registries at various levels of support and awareness, is real. I'd also suggest that this might be duplicative of the discoverability topic. Though to offer a differing opinion to consider here...

Registries are expensive to build and maintain with very little hope for revenue generation (I know this from experience and from discussions with npm founders), and this is what makes them hard to continue maintaining at scale. Propagation of MCP servers is not remotely at the point of "scaled" where this will be a challenge. So perhaps the "registry fragmentation" problem is one that will naturally go away unless someone can figure out a path to monetize the ability to support these at scale. To put it directly, I fully expect these registries to fall away as expenses build (though I do wish for everyone's efforts and ventures to be successful).

So my suggestion would be not to create a cloud service for this discoverability but to lean into systems like GitHub, npm, etc. to provide a way to openly capture, list, and publish updates to this list, and take that as far as possible. Those that need it are technical teams and can easily sync from there. That should provide a long-term viable way to support these systems even as the lists start to hit a level of actual scale.

I also believe there are differing needs for MCP servers, and I do not believe that, architecturally, MCP clients should have the breadth of all MCP servers available at the ready on disk. While a registry that can be stable and cost-effective for clients to sync to is a good initial solution, we will need to consider how to determine which context directories always need to be available and what should be sourced on demand. For example, if I create an MCP server for a local service in town, I don't expect anyone to need constant knowledge of its existence. I expect that leaning into the web as the on-demand system for that would be ideal. What happens when there are 100k MCP servers, 10 million, 100 million, more? Then multiply that by the number of times that content needs to sync. I'm not suggesting we build for that scale now, but preparing for the ideal outcome that this pattern takes hold means expecting these numbers.
-
Have we explored existing standard formats like OCI? This can, out of the box, allow servers to be adopted by existing container registries (e.g., Quay, Docker Hub, ECR, and GitHub container registries). I.e., one standard vs. one registry.
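For instance, a sketch assuming the ORAS CLI and an invented artifact media type (MCP has no registered OCI artifact type today): a server manifest could be pushed to any OCI-compliant registry as-is:

```shell
# Push an MCP server manifest as an OCI artifact to any OCI-compliant registry.
# The artifact type below is invented for illustration.
oras push ghcr.io/example/weather-mcp:1.0.0 \
  --artifact-type application/vnd.mcp.server.manifest.v1+json \
  server.json:application/json

# Any OCI registry (Quay, Docker Hub, ECR, GHCR) can then store and serve it.
oras pull ghcr.io/example/weather-mcp:1.0.0
```

The existing auth, replication, and private-registry tooling of the OCI ecosystem would come along for free.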
-
I believe the biggest directory site is still https://mcp.so
-
There are two categories of use cases here, and I think much of the discussion is missing that critical distinction (@sean-roberts points out the issue). The problem is likely because we're all developers, in many cases using MCP to build tools for ourselves where we know exactly which MCP servers we want to install (if only we had a registry). But we're not the only target audience for MCP; in fact, I'd argue we are the least important (at least in terms of audience size).

MCP, by itself, solves the problem of how to enrich a user agent with structured interaction capabilities, with a very precise level of control as opposed to the more general agents.json-esque approach. There's real value in exposing structured and secured interactions, regardless of enterprise or consumer use case. But unlike MCP servers for enterprise scenarios like software development, the interactions in most consumer use cases will depend almost entirely on the user's browsing context (how many apps do most people actually install on their devices?), and it's impossible to maintain a registry of all of those interactions. For example, imagine a world in which every WordPress site offers an MCP server for commenting on blog posts (perhaps as a site-specific link to a multitenant MCP server hosted by Automattic's Jetpack service). Yahoo! tried to build a portal for the entire web in the '90s, but Google had the right idea: present search results using metadata (`<meta>` tags, in this case) that is published by the websites themselves and that any search engine could use.

For enterprise scenarios, by all means, define a registry protocol and let different registries try to establish themselves. I'm sure GitHub Enterprise, Azure DevOps, and the rest will be interested in building a package feed for MCP servers according to whatever standard you choose. WASM as a common target makes sense to me, for what it's worth. Also, insisting on open source isn't likely to be as helpful as people think: the xz Utils backdoor, for example, was fully open source.

However, for everyday AI and consumer scenarios, Anthropic and the MCP community need to embrace the Google philosophy of relying on the open web. I don't know enough to state which approach is best, and there are several competing would-be standards vying for attention (agents.json being one recurring theme), but discovery of arbitrary servers and APIs (which may not have an MCP server defined) via structured AI-oriented metadata, and then progressively enhancing those connections with MCP when it's available, is a far more impactful strategy in the consumer space.

EDIT: I should clarify that I'm both saying more focus needs to be placed on the discovery discussion and also that restricting MCP servers to being open source, or to some language or other, is self-defeating in a world where every website potentially comes with one or several MCP servers. Registries are helpful for enterprise/developer local uses, but not for the broader world of consumer AI.
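To illustrate the open-web discovery idea in the spirit of those meta tags: a hypothetical snippet a site could publish itself. Neither the `rel` value nor the `meta` name below is part of any current standard; both are invented for illustration:

```html
<!-- Hypothetical AI-oriented discovery metadata, published by the site itself -->
<link rel="mcp-server" href="https://example.com/mcp/sse" />
<meta name="ai-interactions" content="comment-on-post, search-archives" />
```

A crawler or user agent could harvest such tags the way search engines harvest metadata today, with no central registry in the loop.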
-
For technical or dev use, an npmjs-like registry plus a CLI should be the key. A single point of truth with good version control is beneficial.
-
**Proposal: Service Registration & Dynamic Push Mechanism for Model Context Protocol (MCP)**

**What is it? (Concept Overview)**

This proposal introduces a Service Registration Hub to MCP, enabling dynamic discovery and management of MCP servers via Server-Sent Events (SSE).

Architecture (text diagram):

**Why is it like that? (Design Rationale)**

1. Centralized Service Discovery
2. Real-Time Updates via SSE
3. Simplified Health Management
4. Single-Node Implementation

**How does it work? (Technical Implementation)**

1. Service Registration Flow

2. SSE Subscription for Clients

```javascript
// Client subscribes to service updates
const eventSource = new EventSource('/sse?service=mcp');
eventSource.onmessage = (event) => {
  const data = JSON.parse(event.data);
  updateServiceList(data); // Update the local server list
};
```

3. Heartbeat & Health Checks

4. Client-Side Adaptation

**Design Philosophy**

**Value Proposition**

**Next Steps**

GitHub Profile: aliyun1024qjc
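The registration and heartbeat steps aren't spelled out above, so here is a minimal sketch of how a server-side heartbeat against such a hub might look; the endpoint path, payload shape, and 30-second interval are all assumptions for illustration, not part of the proposal:

```javascript
// Hypothetical heartbeat loop for a registered MCP server.
// The '/heartbeat' endpoint and payload fields are illustrative assumptions.
const HUB_URL = 'https://registry.example.com';
const serverId = 'example-weather';

setInterval(async () => {
  try {
    const res = await fetch(`${HUB_URL}/heartbeat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ id: serverId, status: 'healthy' }),
    });
    if (!res.ok) console.warn('Heartbeat rejected:', res.status);
  } catch (err) {
    console.warn('Hub unreachable, will retry:', err.message);
  }
}, 30_000); // the hub would presumably mark servers stale after a few missed beats
```

On the hub side, the SSE push from the proposal would then notify subscribed clients whenever a server's health state changes.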
-
We are discussing whether this is a server directory similar to Docker Hub, or a registration center that supports hosts in dynamically discovering servers. Personally, I believe an authoritative server directory is very useful, but a unified registration center is not very reasonable. However, dynamic registration and discovery of servers could be part of the MCP protocol specification, used to guide the construction of privately deployed MCP clusters.
-
A couple points I'd like to make:

**A grand central registry vs. a registry protocol/spec**

Coming from the enterprise world, I believe that you're going to want to support multiple "feeds" (so that an IT admin can provide a list of known and vetted MCPs), but (a) standardize what a 'feed' looks like so that apps can build marketplaces/MCP pickers on top of feeds, and (b) provide an 'official' feed that the majority of MCP makers will publish their work to and that the majority of non-enterprise users will use by default. I believe @LarsKemmann was also suggesting something along these lines. Trying to 'own' the only registry in the world would both create a very attackable single point of failure and alienate most of the enterprise industry.

**Local vs. remote MCP servers**

I feel that the majority of this technical discussion is only complicated because MCP servers are currently all local, and therefore there needs to be a story for somehow getting the server onto the user's local machine and running on whatever platform they use. If you shelve that discussion for the time being, all you really need for a registry of remote MCP servers is something like a feed of endpoint URLs and descriptions. And I suspect remote servers are going to be the preferred tech for web services, for a variety of reasons. I'm not sure how many legitimate use cases there are where a local MCP server is the better choice: basically things like local filesystem access, and local access to settings, screen contents, and hardware (cursor, sound, network, etc.). Whereas there are hundreds or thousands of websites/web services that will potentially want to integrate with LLMs and be better served by remote MCP, since those will be better able to integrate with both web and desktop based LLM apps.

This seems like it would be much simpler and cheaper to design/build/maintain, and could be stood up pretty quickly (once remote MCPs become a reality). Local MCP servers could even be split out into their own separate registry/package manager that you build at some later date; I don't see any real benefit to combining the two.
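For what it's worth, a minimal sketch of what a standardized remote-server feed could look like; the field names are invented for illustration, not a proposed spec:

```json
{
  "feedVersion": "0.1",
  "name": "Contoso IT-approved MCP servers",
  "updated": "2025-05-01T00:00:00Z",
  "servers": [
    {
      "name": "crm-connector",
      "description": "Read-only access to the internal CRM",
      "endpoint": "https://mcp.contoso.example/crm/sse",
      "transport": "sse",
      "auth": "oauth2"
    }
  ]
}
```

An IT admin could point company clients at one or more such feed URLs, with the 'official' public feed configured as the default for everyone else.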
-
Hi everyone,

Agreed on the remote MCP servers. Is there a way to be sure an MCP server is secure, by the way? Some are asking for API keys, while we don't really know who has published the MCP server (even in the case of common Google APIs, for instance). Great to hear from the community on this as well.

Best,
Guillaume
-
How about taking a decentralized approach similar to the Fediverse? Since MCP is a protocol, centralizing its index would just create a single point of failure, defeating many of its advantages. It seems unwise to put the future of MCP in the hands of just one administrator.
-
What about just using the GitHub URL as the source of MCPs, like Go does, and making a registry of MCPs like pkg.go.dev?

```shell
# golang
go get -v github.com/modelcontextprotocol/example-app

# mcp style?
mcp get github.com/modelcontextprotocol/example-app
```

I was actually building something similar to MCP (hyperpocket) with my team, and we're moving toward the idea of extending MCP since you seem to be becoming the standard now. Sharing the idea and the pain points we had: one of them was a debate between having a global registry or just going with GitHub URLs to pull everything you can launch, like the Go package system. We chose to just use GitHub URLs and make them searchable on GitHub somehow; in the future, we thought about separating out just the registry part, like https://pkg.go.dev/, with the sources remaining on GitHub. I'm sharing our example code:

```python
from hyperpocket_langgraph import PocketLanggraph
from langgraph import AgentGraph

# Load tools with Hyperpocket
pocket = PocketLanggraph(tools=[
    "https://github.com/vessl-ai/hyperpocket/tree/main/tools/slack/get-messages",
    "https://github.com/vessl-ai/hyperpocket/tree/main/tools/github/list-pull-requests",
])
tool_nodes = pocket.get_tool_node(should_interrupt=True)

# Define the LangGraph workflow
graph = AgentGraph()
graph.add_node("schedule_message", tool_nodes)
graph.connect("start", "schedule_message")
graph.connect("schedule_message", "end")

# Execute the workflow
graph.execute({"channel": "general", "message": "Team meeting at 3 PM"})
```

P.S. About the need for not just a tool protocol interface but a unified "execution interface": if you go with the pkg.go.dev style, you might want to consider this interface too. To achieve that, we've made a tool config like this:

```json
{
  "tool": {
    "name": "slack_get_messages",
    "description": "get slack messages",
    "inputSchema": {
      "properties": {
        "channel": {
          "title": "Channel",
          "type": "string"
        },
        "limit": {
          "title": "Limit",
          "type": "integer"
        }
      },
      "required": [
        "channel",
        "limit"
      ],
      "title": "SlackGetMessageRequest",
      "type": "object"
    }
  },
  "auth": {
    "auth_provider": "slack",
    "scopes": ["channels:history"]
  },
  "language": "python",
  "entrypoint": {
    "build": "pip install .",
    "run": "python -m get_message"
  }
}
```
-
What I would request on the metadata part of the registry API is something similar to the config JSON in Smithery: https://smithery.ai/docs/config. There, non-technical users don't have to understand args and env vars; they just put their secrets into a UI form with textboxes. That will make it easier for a broader audience to adopt.
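To illustrate the idea (this is not Smithery's actual format; the field names are invented): registry metadata could carry a small schema that clients render as a plain form, so users never see raw args or env vars:

```json
{
  "configSchema": {
    "type": "object",
    "properties": {
      "apiKey": {
        "type": "string",
        "title": "API Key",
        "description": "Paste your API key here",
        "secret": true
      },
      "region": {
        "type": "string",
        "title": "Region",
        "enum": ["us", "eu"],
        "default": "us"
      }
    },
    "required": ["apiKey"]
  }
}
```

A client would render a password textbox for `apiKey` and a dropdown for `region`, then map the submitted values onto the server's env vars behind the scenes.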
-
In this regard, I am currently conducting a security assessment if you have projects to submit, in order to contribute to the standardization of good security practices for MCP. It's fascinating to see how we jump to new protocols without going through the standardization process, but business reasons always take precedence. Have a nice Sunday, all.

On Sun, Apr 13, 2025 at 03:10, Raphael Kieling wrote:

> I liked this idea. I was sketching something to build for my internal team and randomly came across this thread. In my opinion, there are a few ways to approach this, especially when thinking about companies with multiple areas or teams. Just some random ideas I was thinking of; I feel they're a bit related to your "enterprise" topic.
>
> 1 - MCP Builder: Each team can create its own MCPs and tools, being responsible for handling the incoming/outgoing requests. That means exposing endpoints like /sse or /message to clients and translating MCP client requests to REST APIs. It would literally be a builder: create the translation and make it 100% easier for teams to expose their existing APIs without touching any MCP code. The downside is that this service would also be responsible for managing all the stateful MCP servers. Hopefully, the new Streamable HTTP proposal will be released soon to make that easier.
>
> 2 - MCP Discovery: Teams can register their MCPs along with metadata like scope, authentication type (some services require specific auth, and the LLM gateway for a given area might not have access), repository, etc. This would allow us to expose a proxy that targets the desired MCP. While returning the raw endpoint is possible, having a unified way to access them is more convenient and could hide a private VPN behind it. For example:
>
> - GET /discovery?area=AI_TECH&authentication=INTERNAL_ADMIN: returns only MCPs from the AI_TECH area that also support the INTERNAL_ADMIN auth scope.
> - GET/POST /discovery/{mcpId}/proxy/**: exposes all endpoints like /sse, /message, etc.; this is the endpoint the MCP client would use.
-
I propose a design for the MCP Server Registry (MSR).

Principles:

Arch:
-
I'm not advocating for any vendor or tech, but Docker just announced https://hub.docker.com/catalogs/mcp and https://www.docker.com/blog/introducing-docker-mcp-catalog-and-toolkit/, and given that both private and public registries are standard, why should this be off Docker?
-
**Züs-MCP Registry Proposal**

It's simple. Any MCP service provider can get a permission-less registry and verifiable setup by using Züs as their storage backend. Since data is at the heart of MCP services, Züs not only stores data securely but also acts as a built-in registry for service providers, with attributes like provenance and bulletproof security (check out Blimp.software). Züs is an open network, allowing you to self-host storage servers and deliver essentially an on-prem solution. Here's the trick:

Since a company can have multiple wallets, and each wallet can handle multiple allocations, it's a scalable, decentralized, and permission-less system. No need for extra infrastructure or jumping through hoops: you just store your MCP data on Züs and you're part of the system. With Blimp.software (to quickly set up wallets and allocations and view data), Zs3server (to work with S3-compatible storage), and Atlus.cloud (to explore the blockchain), everything's secure, verifiable, and self-sovereign, with no central authority needed. Plus, any third-party registry can curate a list of MCP servers straight from the blockchain, filtering by industry, region, or whatever they want.

**The Building Blocks**

1. Blimp.software
2. MCP Server
3. Zs3server
4. Atlus.cloud

**How It All Flows**

- Step 1: Set Up Your Wallet & Allocation
- Step 2: Store Your MCP Data
- Step 3: Get Verified

**Why Use Züs for MCP?**

**Comparison Table**

**Diagram**

Refer to the attached diagram for the visual representation of the architecture.
-
I'd like to contribute to this discussion with a solution we've been developing that could address the registry needs outlined here.

**The MCP Server Manifest Concept**

In our initial proposal, we outlined a standardized approach for MCP server management using a decentralized yet organized system built around manifest files. This approach is inspired by both npm's package.json and ESM's URL-based imports, allowing MCP servers to be described consistently while remaining decentralized. The manifest contains standardized metadata, as shown in the example below.

The manifest concept creates a middle ground between completely centralized repositories and the current fragmented landscape.

**MCPBar: A Reference Implementation**

We've recently launched MCPBar, a reference implementation of this concept. MCPBar is a CLI tool that uses the standardized manifest format to simplify discovery, installation, and management of MCP servers across different clients. MCPBar demonstrates how a registry based on this manifest concept can work in practice, with features like server search, simple installation, and standardized configuration.

Example manifest:

```json
{
  "name": "github",
  "version": "1.0.0",
  "description": "GitHub MCP Server for AI tools using Model Context Protocol.",
  "homepage": "https://github.com/github/github-mcp-server",
  "repository": {
    "type": "git",
    "url": "https://github.com/github/github-mcp-server.git"
  },
  "license": "MIT",
  "keywords": ["mcp", "ai", "github"],
  "inputs": [
    {
      "id": "github_token",
      "type": "promptString",
      "description": "GitHub Personal Access Token",
      "password": true
    }
  ],
  "server": {
    "command": "docker",
    "args": [
      "run", "-i", "--rm", "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
      "ghcr.io/github/github-mcp-server"
    ],
    "env": {
      "GITHUB_PERSONAL_ACCESS_TOKEN": "${input:github_token}"
    }
  }
}
```

Usage:

```shell
# Install an MCP server
mcpbar install github/github-mcp-server

# Search for servers
mcpbar search github
```

This approach gives us the best of both worlds: decentralized publication but standardized discovery and installation. MCPBar could serve as the foundation for a standardized registry that addresses the requirements outlined in this discussion. We'd love to get feedback from the community and potentially align our efforts with the official MCP direction. Full details are available on our blog and GitHub repository.
-
A great first starting point would be an official spec for how an MCP API can be described in a single, self-contained JSON file, including some info about the server and how to connect, similar to OpenAPI for REST APIs. I would assume that one server can potentially also host multiple MCP APIs. From there, you can put a thinner publishing/discovery/federation layer on top. Concretely: how you can provide those JSON file(s), how a bigger registry would make the information available via a convenient API, and (maybe later) how registries can sync the metadata between them efficiently, without creating conflicts.
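No such spec exists yet, so here is a hedged sketch of what one self-contained description file might contain; every field name is invented for illustration:

```json
{
  "name": "example-crm",
  "description": "CRM lookups and updates exposed over MCP",
  "mcpApis": [
    {
      "name": "crm-tools",
      "transport": "sse",
      "url": "https://mcp.example.com/crm/sse",
      "capabilities": ["tools", "resources"]
    }
  ],
  "auth": { "type": "oauth2", "authorizationUrl": "https://auth.example.com/authorize" },
  "publisher": { "name": "Example Corp", "url": "https://example.com" }
}
```

Modeling `mcpApis` as an array reflects the point above that one server might host multiple MCP APIs.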
-
Thank you all for the engagement and for exploring so many different approaches in this discussion! There is an in-progress API implementation for an official server registry here now: https://github.com/modelcontextprotocol/registry
-
Just providing an alternative viewpoint here: while I agree with this concept in principle, new standards, centralized repositories, and single sources of truth very rarely succeed in practice. We already have established patterns for how users interact with new products and services on the internet: the World Wide Web supported by search engines, online forums, third-party resources, wikis, advertising, etc. This ecosystem works well precisely because it accommodates the complexities and differences between applications. Trying to conform them to a single standard of governance is going to be virtually impossible, and in some ways it even restricts the explorability and potential for LLMs and AI agents to engage with the ecosystem ("sorry, you can't use that app, it's not part of the registry").

These existing patterns already effectively guide users to finding websites, tools, and platforms, and they will do the same for discoverability of MCP servers and AI agents. This proposal feels reminiscent of the early internet, where websites would simply provide lists of resources, an approach we've since evolved beyond because once the internet grew beyond 10 websites, half of which were dancing cats, it was no longer feasible to maintain. Our existing decentralized approach is more flexible, sustainable, and allows for organic growth and innovation, which would likely be severely hampered by a centralized, constrained governance approach.
-
Discussion Topic
The purpose of this is to sketch a baseline set of required functionality for a public industry standard registry of MCP Servers. A number of sites have popped up in recent months. For example:
https://mcpserver.cloud/
https://mcp.run/
https://smithery.ai/
https://block.github.io/goose/v1/extensions/
While there is value in different server browsers and client integrations for MCP, there will be additional value in a “single-source-of-truth” registry containing the metadata about MCP servers themselves. Right now each of these sites has its own copy of data, relying on additions by maintainers or contributors. They each present a subset of all available MCP servers globally, and duplicate much of the storage and search logic. Ultimately this presents a fragmented view of what is available to end-users.
In contrast, a single widely adopted registry will be a bedrock resource that higher level tools interfacing with MCP servers can leverage.
Feature Requirements
Global Public API
We need a robust API serving metadata about every server, as well as artifact download URIs, search functionality (via utility and categories), new server publishing, storage, tagging, versioning, etc.
This will allow multiple server browsers / client install flows to emerge, while maintaining and deriving the benefits of a single source of all metadata.
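To make this concrete, a hedged sketch of what such an API surface could look like; the host, paths, and parameters are illustrative only, not a committed design:

```shell
# Search servers by text query and category (illustrative endpoints)
curl "https://registry.example.org/v0/servers?q=github&category=developer-tools"

# Fetch metadata for a specific server version, including artifact download URIs
curl "https://registry.example.org/v0/servers/example-weather/versions/1.2.0"

# Publish a new server version (authenticated)
curl -X POST "https://registry.example.org/v0/servers" \
  -H "Authorization: Bearer $TOKEN" \
  -d @server-metadata.json
```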
Server Browser
Similar to https://www.npmjs.com/, we should have a standard server browser that implements and exposes a UX for these feature requirements. This is not to say we will discourage other browsers, but that, to pair with the global public API, there should be at least one officially maintained server browser.
Curation and Segmentation
There should be support in the API and UX for browsing MCP servers of notable utility (popular, most installed, new this week) as well as specific categories for the services they connect to (finance tools, fitness, wellness, etc.).
Security
Security should be treated as a first-class consideration in the registry. We should implement automated code scanning looking for traditional CVEs (common vulnerabilities & exposures) as well as analysis specific to MCP servers (adherence to the authorization spec, scanning for prompt injection, etc.) that will become clearer over time. The global public API should also be protected against publishing and DDoS attacks.
Further Exploration
Unified Runtime
We could explore a unified runtime for MCP servers (a la npx) that would work for MCP servers written in any language. This would simplify the installation and usage flow for client integrators.
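For illustration, a sketch of how an npx-style unified runtime might feel to a client integrator; the `mcp` command and its flag are hypothetical:

```shell
# Hypothetical one-step resolve/fetch/run, independent of the server's language
# or which package registry (npm, PyPI, OCI) hosts the artifact.
mcp run example-weather --transport stdio
```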