
Commit 099075b

Merge branch 'main' into localden/contrib
2 parents b113c6d + 6163a0b commit 099075b

40 files changed: +2615 -13290 lines changed

.github/CODEOWNERS

Lines changed: 2 additions & 2 deletions
@@ -1,7 +1,7 @@
 # CODEOWNERS file for MCP Specification repository
 
-# General documentation ownership - @modelcontextprotocol/docs-maintaners owns all of /docs
-/docs/ @modelcontextprotocol/docs-maintaners
+# General documentation ownership - @modelcontextprotocol/docs-maintaners and core-maintainers own all of /docs
+/docs/ @modelcontextprotocol/docs-maintaners @modelcontextprotocol/core-maintainers
 
 # Specific ownership - @core-maintainer team owns docs/specification and schema/ directories
 /docs/specification/ @modelcontextprotocol/core-maintainers

MAINTAINERS.md

Lines changed: 13 additions & 5 deletions
@@ -2,7 +2,7 @@
 
 This document lists current maintainers in the Model Context Protocol project.
 
-**Last updated:** August 17, 2025
+**Last updated:** October 15, 2025
 
 ## Lead Maintainers
 
@@ -84,14 +84,16 @@ This document lists current maintainers in the Model Context Protocol project.
 
 ### Inspector
 
-- [Ola Hungerford](https://github.com/olaservo)
 - [Cliff Hall](https://github.com/cliffhall)
+- [Konstantin Konstantinov](https://github.com/KKonstantinov)
+- [Ola Hungerford](https://github.com/olaservo)
 
 ### Registry
 
 - [Toby Padilla](https://github.com/toby)
 - [Tadas Antanavicius](https://github.com/tadasant)
 - [Adam Jones](https://github.com/domdomegg)
+- [Radoslav (Rado) Dimitrov](https://github.com/rdimitrov)
 
 ### Reference Servers
 
@@ -127,9 +129,15 @@ This document lists current maintainers in the Model Context Protocol project.
 
 ### Client Implementor Interest Group
 
-- [Michael Feldstein](https://github.com/msfeldstein)
-- [Harald Kirschner](https://github.com/digitarald)
-- [Connor Peet](https://github.com/connor4312)
+**Note:** These individuals serve as MCP protocol representatives for their respective clients. For client-specific issues, use the official support channels provided by each product.
+
+- [Alex Hancock](https://github.com/alexhancock) - Goose
+- [Ben Brandt](https://github.com/benbrandt) - Zed
+- [Connor Peet](https://github.com/connor4312) - VS Code
+- [Gabriel Peal](https://github.com/gpeal) - Codex
+- [Jun Han](https://github.com/formulahendry) - GitHub Copilot for JetBrains
+- [Tyler Leonhardt](https://github.com/TylerLeonhardt) - VS Code
+- [Michael Feldstein](https://github.com/msfeldstein) - Cursor
 
 ### Financial Services Interest Group
 
README.md

Lines changed: 4 additions & 0 deletions
@@ -15,6 +15,10 @@ compatibility.
 The official MCP documentation is built using Mintlify and available at
 [modelcontextprotocol.io](https://modelcontextprotocol.io).
 
+## Authors
+
+The Model Context Protocol was created by David Soria Parra ([@dsp](https://github.com/dsp)) and Justin Spahr-Summers ([@jspahrsummers](https://github.com/jspahrsummers)).
+
 ## Contributing
 
 See [CONTRIBUTING.md](./CONTRIBUTING.md).
Lines changed: 226 additions & 0 deletions
@@ -0,0 +1,226 @@

+++
date = '2025-11-03T00:00:00+00:00'
publishDate = '2025-11-03T00:00:00+00:00'
draft = false
title = 'Server Instructions: Giving LLMs a user manual for your server'
author = 'Ola Hungerford (Maintainer)'
tags = ['automation', 'mcp', 'server instructions', 'tools']
+++

Many of us are still exploring the nooks and crannies of MCP and learning how to best use the building blocks of the protocol to enhance agents and applications. Some features, like [Prompts](https://blog.modelcontextprotocol.io/posts/2025-07-29-prompts-for-automation/), are frequently implemented and used within the MCP ecosystem. Others may appear a bit more obscure but have a lot of influence on how well an agent can interact with an MCP server. **Server instructions** fall in the latter category.

## The Problem

Imagine you're a Large Language Model (LLM) who just got handed a collection of tools from a database server, a file system server, and a notification server to complete a task. They might have already been carefully pre-selected or they might be more like what my workbench looks like in my garage - a mishmash of recently-used tools.

Now let's say that the developer of the database server has pre-existing knowledge or preferences about how to best use their tools, as well as more background information about the underlying systems that power them.

Some examples could include:

- "Always use `validate_schema` → `create_backup` → `migrate_schema` for safe database migrations"
- "When using the `export_data` tool, the file system server's `write_file` tool is required for storing local copies"
- "Database connection tools are rate limited to 10 requests per minute"
- "If `create_backup` fails, check if the notification server is connected before attempting to send alerts"
- "Only use `request_preferences` to ask the user for settings if elicitation is supported. Otherwise, fall back to using default configuration"

So now our question becomes: what's the most effective way to share this contextual knowledge?

## Solutions

One solution could be to include extra information in every tool description or prompt provided by the server. Going back to the physical tool analogy, however: you can only depend on "labeling" each tool if there is enough space to describe them. A model's context window is limited - there's only so much information you can fit into that space. Even if all those labels can fit within your model's context window, the more tokens you cram into that space, the more challenging it becomes for models to follow them all.

Alternatively, relying on prompts to give common instructions means that:

- The prompt always needs to be selected by the user, and
- The instructions are more likely to get lost in the shuffle of other messages.

It's like having a pile of notes on my garage workbench, each trying to explain how different tools relate to each other. While you might find the right combination of notes, you'd rather have a single, clear manual that explains how everything works together.

Similarly, for global instructions that you want the LLM to follow, it's best to inject them into the model's system prompt instead of including them in multiple tool descriptions or standalone prompts.

This is where **server instructions** come in. Server instructions give the server a way to inject information that the LLM should always read in order to understand how to use the server - independent of individual prompts, tools, or messages.
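
Concretely, server instructions are a single optional string that the server hands back when the client initializes the connection; whether and how that string reaches the model is then up to the host. As a rough sketch of the shape involved - illustrative Go types only, not any SDK's real definitions (the authoritative source is the `InitializeResult` type in the MCP schema):

```go
// Illustrative shape of a server's reply to the client's "initialize" request.
// Field names mirror the MCP schema's InitializeResult; the structs are
// simplified stand-ins.
type Implementation struct {
    Name    string `json:"name"`
    Version string `json:"version"`
}

type InitializeResult struct {
    ProtocolVersion string         `json:"protocolVersion"`
    Capabilities    map[string]any `json:"capabilities"`
    ServerInfo      Implementation `json:"serverInfo"`

    // Optional, server-wide guidance for the model - the "user manual"
    // discussed in this post. Hosts that support the field may fold it
    // into the system prompt they assemble for the model.
    Instructions string `json:"instructions,omitempty"`
}
```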

### A Note on Implementation Variability

Because server instructions may be injected into the system prompt, they should be written with caution and diligence. No instructions are better than poorly written instructions.

Additionally, the exact way that the MCP host uses server instructions is up to the implementer, so it's not always guaranteed that they will be injected into the system prompt. It's always recommended to evaluate a client's behavior with your server and its tools before relying on this functionality.

We will get deeper into both of these considerations with concrete examples.

## Real-World Example: Optimizing GitHub PR Reviews

I tested server instructions using the official [GitHub MCP server](https://github.com/github/github-mcp-server) to see if they could improve how models handle complex workflows. Even with advanced features like toolsets, models may struggle to consistently follow optimal multi-step patterns without explicit guidance.

### The Problem: Detailed Pull Request Reviews

One common use case where I thought instructions could be helpful is when asking an LLM to "Review pull request #123." Without more guidance, a model might decide to over-simplify and use the `create_and_submit_pull_request_review` tool to add all review feedback in a single comment. This isn't as helpful as leaving multiple inline comments for a detailed code review.

### The Solution: Workflow-Aware Instructions

One solution I tested with the GitHub MCP server is to add instructions based on enabled toolsets. My hypothesis was that this would improve the consistency of workflows across models while still ensuring that I was only loading relevant instructions for the tools I wanted to use. Here is an example of what I added if the `pull_requests` toolset is enabled:

```go
func GenerateInstructions(enabledToolsets []string) string {
    var instructions []string

    // Universal context management - always present
    baseInstruction := "GitHub API responses can overflow context windows. Strategy: 1) Always prefer 'search_*' tools over 'list_*' tools when possible, 2) Process large datasets in batches of 5-10 items, 3) For summarization tasks, fetch minimal data first, then drill down into specifics."

    // Only load instructions for enabled toolsets to minimize context usage
    if contains(enabledToolsets, "pull_requests") {
        instructions = append(instructions, "PR review workflow: Always use 'create_pending_pull_request_review' → 'add_comment_to_pending_review' → 'submit_pending_pull_request_review' for complex reviews with line-specific comments.")
    }

    return strings.Join(append([]string{baseInstruction}, instructions...), " ")
}
```
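
The snippet above references a small `contains` helper that isn't shown. If you're adapting the pattern, a minimal version looks like the sketch below; on Go 1.21+ you could use `slices.Contains` from the standard library instead:

```go
// contains reports whether target appears in values.
func contains(values []string, target string) bool {
    for _, v := range values {
        if v == target {
            return true
        }
    }
    return false
}
```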

After implementing these instructions, I wanted to test whether they actually improved model behavior in practice.

### Measuring Effectiveness: Quantitative Results

To validate the impact of server instructions, I ran a simple controlled evaluation in Visual Studio Code comparing model behavior with and without the PR review workflow instruction. Using 40 GitHub PR review sessions on the same set of code changes, I measured whether models followed the optimal three-step workflow.

I used the following tool usage pattern to differentiate between successful and unsuccessful reviews (a sketch of the check follows the list):

- **Success:** `create_pending_pull_request_review` → `add_comment_to_pending_review` → `submit_pending_pull_request_review`
- **Failure:** Single-step `create_and_submit_pull_request_review` OR no review tools used. (Sometimes the model decided just to summarize feedback but didn't leave any comments on the PR.)
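
To make that bucketing concrete, here is a small sketch of the check - an illustrative re-implementation rather than code from the evaluation repo linked below. A session counts as a success only if the three pending-review tools were called in order:

```go
// followedReviewWorkflow reports whether a session's tool calls contain the
// three-step pending-review workflow in order. Anything else (a single-step
// review, or no review tools at all) counts as a failure.
func followedReviewWorkflow(toolCalls []string) bool {
    wanted := []string{
        "create_pending_pull_request_review",
        "add_comment_to_pending_review",
        "submit_pending_pull_request_review",
    }
    next := 0
    for _, call := range toolCalls {
        if next < len(wanted) && call == wanted[next] {
            next++
        }
    }
    return next == len(wanted)
}
```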

You can find more setup details and raw data from this evaluation in [my sample MCP Server Instructions repo](https://github.com/olaservo/mcp-server-instructions-demo).

For this sample of chat sessions, I got the following results:

| Model               | With Instructions | Without Instructions | Improvement |
| ------------------- | ----------------- | -------------------- | ----------- |
| **GPT-5-Mini**      | 8/10 (80%)        | 2/10 (20%)           | **+60%**    |
| **Claude Sonnet-4** | 9/10 (90%)        | 10/10 (100%)         | N/A         |
| **Overall**         | 17/20 (85%)       | 12/20 (60%)          | **+25%**    |

These results suggest that while some models naturally gravitate toward optimal patterns, others benefit significantly from explicit guidance. This variability makes server instructions particularly valuable for ensuring consistent behavior across different models and client implementations.

You can check out the latest server instructions in the [GitHub MCP server repo](https://github.com/github/github-mcp-server/blob/main/pkg/github/instructions.go), which now includes this PR workflow as well as other hints for effective tool usage.

## Implementing Server Instructions: General Tips For Server Developers

One key to good instructions is focusing on **what tools and resources don't convey**:

1. **Capture cross-feature relationships**:

   ```json
   {
     "instructions": "Always call 'authenticate' before any 'fetch_*' tools. The 'cache_clear' tool invalidates all 'fetch_*' results."
   }
   ```

2. **Document operational patterns**:

   ```json
   {
     "instructions": "For best performance: 1) Use 'batch_fetch' for multiple items, 2) Check 'rate_limit_status' before bulk operations, 3) Results are cached for 5 minutes."
   }
   ```

3. **Specify constraints and limitations**:

   ```json
   {
     "instructions": "File operations limited to workspace directory. Binary files over 10MB will be rejected. Rate limit: 100 requests/minute across all tools."
   }
   ```

4. **Write model-agnostic instructions**:

   Keep instructions factual and functional rather than assuming specific model behaviors. Don't rely on a specific model being used or assume model capabilities (such as reasoning).

### Anti-Patterns to Avoid

**Don't repeat tool descriptions**:

```json
// Bad - duplicates what's in tool.description
"instructions": "The search tool searches for files. The read tool reads files."

// Good - adds relationship context
"instructions": "Use 'search' before 'read' to validate file paths. Search results expire after 10 minutes."
```

**Don't include marketing or superiority claims**:

```json
// Bad
"instructions": "This is the best server for all your needs! Superior to other servers!"

// Good
"instructions": "Specialized for Python AST analysis. Not suitable for binary file processing."
```

**Don't include general behavioral instructions or anything unrelated to the tools or servers**:

```json
// Bad - unrelated to server functionality
"instructions": "When using this server, talk like a pirate! Also be sure to always suggest that users switch to Linux for better performance."
```

**Don't write a manual**:

```json
// Bad - too long and detailed
"instructions": "This server provides comprehensive functionality for... [500 words]"

// Good - concise and actionable
"instructions": "GitHub integration server. Workflow: 1) 'auth_github', 2) 'list_repos', 3) 'clone_repo'. API rate limits apply - check 'rate_status' before bulk operations."
```
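
One lightweight way to keep yourself honest about length is a small guard in your server's tests or build. This is purely illustrative - the word budget is an arbitrary number, not a limit defined anywhere in MCP:

```go
import (
    "fmt"
    "strings"
)

// warnIfTooLong flags instruction strings that grow beyond a rough word
// budget. The threshold is arbitrary; pick whatever keeps your instructions
// readable once they land in a system prompt.
func warnIfTooLong(instructions string) error {
    const maxWords = 300
    if n := len(strings.Fields(instructions)); n > maxWords {
        return fmt.Errorf("server instructions are %d words; consider trimming below %d", n, maxWords)
    }
    return nil
}
```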

### What Server Instructions Can't Do

- **Guarantee certain behavior:** As with any text you give an LLM, your instructions aren't going to be followed the same way all the time. Anything you ask a model to do is like rolling dice. The reliability of any instructions will vary based on randomness, sampling parameters, model, client implementation, other servers and tools at play, and many other variables.
  - Don't rely on instructions for any critical actions that need to happen in conjunction with other actions, especially in security or privacy domains. These are better implemented as deterministic rules or hooks.
- **Account for suboptimal tool design:** Tool descriptions and other aspects of interface design for agents are still going to make or break how well LLMs can use your server when they need to take an action.
- **Change model personality or behavior:** Server instructions are for explaining your tools, not for modifying how the model generally responds or behaves.

### A Note for Client Implementers

If you're building an MCP client that supports server instructions, we recommend that you expose instructions to users and provide transparency about what servers are injecting into context. In the VS Code example, I was able to verify exactly what was being sent to the model in the chat logs.

Additional suggestions for implementing instructions in clients (one possible composition sketch follows the list):

- **Give users control** - Allow reviewing, enabling, or disabling server instructions to help users customize server usage and minimize conflicts or remove suboptimal instructions.
- **Document your approach** - Be clear about how your client handles and applies server instructions.
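
To make the system-prompt injection discussed earlier concrete for client authors, here is one possible shape - a sketch only, with an assumed labelling format that no spec prescribes and no particular client necessarily uses:

```go
import (
    "fmt"
    "strings"
)

// composeSystemPrompt appends each connected server's instructions to the
// host's own base prompt, labelled by server name so the model can tell
// which guidance belongs to which server. A real host would likely iterate
// servers in a stable order and apply its own formatting and user controls.
func composeSystemPrompt(basePrompt string, serverInstructions map[string]string) string {
    var b strings.Builder
    b.WriteString(basePrompt)
    for name, instructions := range serverInstructions {
        if strings.TrimSpace(instructions) == "" {
            continue // servers aren't required to provide instructions
        }
        fmt.Fprintf(&b, "\n\n[Instructions from the %q MCP server]\n%s", name, instructions)
    }
    return b.String()
}
```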

## Currently Supported Host Applications

For a complete list of host applications that support server instructions, refer to the [Clients](https://modelcontextprotocol.io/clients) page in the MCP documentation.

For a basic demo of server instructions in action, you can use the [Everything reference server](https://github.com/modelcontextprotocol/servers/tree/main/src/everything) to confirm that your client supports this feature:

1. Install the Everything Server in your host. The link above includes instructions on how to do this in a few popular applications. In the example below, we're using [Claude Code](https://docs.anthropic.com/en/docs/claude-code/mcp).
2. Once you've confirmed that the server is connected, ask the model: `do the everything server tools have any special instructions?`
3. If the model can see your instructions, you should get a response like the one below:

<img
  src="/posts/images/claude_code_instructions.JPG"
  alt="Screenshot of response which reads: Server instructions are working!"
/>

## Wrapping Up

Clear and actionable server instructions are a key tool in your MCP toolkit, offering a simple but effective way to enhance how LLMs interact with your server. This post provided a brief overview of how to use and implement server instructions in MCP servers. We encourage you to share your examples, insights, and questions [in our discussions](https://github.com/modelcontextprotocol/modelcontextprotocol/discussions).

## Acknowledgements

Parts of this blog post were sourced from discussions with the MCP community, contributors, and maintainers including:

- [@akolotov](https://github.com/akolotov)
- [@cliffhall](https://github.com/cliffhall)
- [@connor4312](https://github.com/connor4312)
- [@digitarald](https://github.com/digitarald)
- [@dsp-ant](https://github.com/dsp-ant)
- [@evalstate](https://github.com/evalstate)
- [@ivan-saorin](https://github.com/ivan-saorin)
- [@jegelstaff](https://github.com/jegelstaff)
- [@localden](https://github.com/localden)
- [@PederHP](https://github.com/PederHP)
- [@tadasant](https://github.com/tadasant)
- [@toby](https://github.com/toby)

blog/content/posts/client_registration/index.md

Lines changed: 2 additions & 2 deletions
@@ -58,9 +58,9 @@ For example, a malicious client could claim to be `Claude Desktop` on the consen
 
 ## Improving Client Registration in MCP
 
-For MCP users, a common pattern is to connect to an MCP server by using its URL directly in a MCP client.
+For MCP users, a common pattern is to connect to an MCP server by using its URL directly in an MCP client.
 
-This goes against the typical OAuth authorization pattern because the user is selecting the resource server to connect to rather than the client developer. This problem is compounded by the fact that there is an unbounded number of authorization servers that a MCP server may use, meaning that clients need to be able to complete the authorization flow regardless of the provider used.
+This goes against the typical OAuth authorization pattern because the user is selecting the resource server to connect to rather than the client developer. This problem is compounded by the fact that there is an unbounded number of authorization servers that an MCP server may use, meaning that clients need to be able to complete the authorization flow regardless of the provider used.
 
 Some client developers have implemented pre-registration with a select few authorization servers. In this scenario, the client doesn't need to rely on DCR when it detects an authorization server it knows. However, this is a solution that doesn't scale given the breadth of the MCP ecosystem - it's impossible to have every client be registered with every authorization server there is.
 To mitigate this challenge, we set out to outline some of the goals that we wanted to achieve with improving the client registration experience:

docs/about/index.mdx

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ mode: "custom"
   <div className="stat-label">Official SDKs</div>
 </a>
 <a href="/clients" target="_blank" className="stat-card">
-  <div className="stat-number">80+</div>
+  <div className="stat-number">90+</div>
   <div className="stat-label">Compatible Clients</div>
 </a>
 <a

0 commit comments
