blog/content/posts/2025-09-22-using-server-instructions.md (18 additions, 12 deletions)
@@ -11,17 +11,17 @@ Many of us are still exploring the nooks and crannies of MCP and learning how to
## The Problem

-Imagine you're a Large Language Model (LLM) who just got handed a collection of tools from servers A, B, and C to complete a task. They might have already been carefully pre-selected or they might be more like what my workbench looks like in my garage - a mishmash of recently-used tools.
+Imagine you're a Large Language Model (LLM) who just got handed a collection of tools from a database server, a file system server, and a notification server to complete a task. They might have already been carefully pre-selected or they might be more like what my workbench looks like in my garage - a mishmash of recently-used tools.

-Now let's say that the developer of MCP Server A has pre-existing knowledge or preferences about how to best use their tools or prompts, as well as more background information about the underlying systems that power them.
+Now let's say that the developer of the database server has pre-existing knowledge or preferences about how to best use their tools, as well as more background information about the underlying systems that power them.

Some examples could include:

-- "Tool C should always be used after tool A and B"
-- "This prompt or tool works best if specialized tools from other servers X and Y are available"
-- "Server A tools are rate limited to 10 requests per minute"
-- "Always look up the user's language and accessibility preferences before attempting to fetch any resources with this server."
-- "Only use tool A to ask the user for their preferences if elicitation is supported. Otherwise, fall back to using default user preferences."
+- "Always use `validate_schema` → `create_backup` → `migrate_schema` for safe database migrations"
+- "When using the `export_data` tool, the file system server's `write_file` tool is required for storing local copies"
+- "Database connection tools are rate limited to 10 requests per minute"
+- "If `create_backup` fails, check if the notification server is connected before attempting to send alerts"
+- "Only use `request_preferences` to ask the user for settings if elicitation is supported. Otherwise, fall back to using default configuration"

So now our question becomes: what's the most effective way to share this contextual knowledge?
@@ -34,17 +34,23 @@ Alternatively, relying on prompts to give common instructions means that:
- The prompt always needs to be selected by the user, and
- The instructions are more likely to get lost in the shuffle of other messages.

-Imagine a pile of post-it notes, all filled with instructions on how to do things with a drawer full of tools. It's totally possible that you have the right notes lined up in front of you to do everything reliably, but it's not always the most efficient way to provide this type of context.
+It's like having a pile of notes on my garage workbench, each trying to explain how different tools relate to each other. You might eventually line up the right combination of notes, but you'd rather have a single, clear manual that explains how everything works together.

-For global instructions that you want the LLM to follow, it's best to inject them into the model's system prompt instead of including them in multiple tool descriptions or standalone prompts.
+Similarly, for global instructions that you want the LLM to follow, it's best to inject them into the model's system prompt instead of including them in multiple tool descriptions or standalone prompts.

This is where **server instructions** come in. Server instructions give the server a way to inject information that the LLM should always read in order to understand how to use the server - independent of individual prompts, tools, or messages.
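
As a rough sketch of what this can look like in practice (an illustration only, assuming the MCP Python SDK, whose `FastMCP` constructor accepts an `instructions` string, and a hypothetical "database-server" that exposes the tools from the examples above):

```python
# Illustrative sketch only - assumes the MCP Python SDK (`mcp` package), whose
# FastMCP constructor accepts an `instructions` string alongside the server name.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP(
    "database-server",  # hypothetical server name for this example
    instructions=(
        "Always use validate_schema -> create_backup -> migrate_schema for safe "
        "database migrations. Database connection tools are rate limited to 10 "
        "requests per minute. Only use request_preferences if elicitation is "
        "supported; otherwise fall back to default configuration."
    ),
)

@mcp.tool()
def validate_schema(table: str) -> str:
    """Check that a table's schema is ready for migration (stub for illustration)."""
    return f"schema for {table} looks valid"

if __name__ == "__main__":
    mcp.run()  # the instructions string is returned to the client during initialization
```

The instructions string travels with the server's initialization response, so the client sees it once, up front, rather than piecing it together from individual tool descriptions.
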

-**Note:** Because server instructions may be injected into the system prompt, they should be written with caution and diligence. No instructions are better than poorly written instructions. Additionally, the exact way that the MCP host uses server instructions is up to the implementer, so it's not always guaranteed that they will be injected into the system prompt. It's always recommended to evaluate a client's behavior with your server and its tools before relying on this functionality.
+### A Note on Implementation Variability

-## Implementing Server Instructions Example: Optimizing Common GitHub Workflows
+Because server instructions may be injected into the system prompt, they should be written with caution and diligence. No instructions are better than poorly written instructions.

-A concrete example of server instructions in action comes from my experiments with the [GitHub MCP server](https://github.com/github/github-mcp-server). Even with advanced options like toolsets for optimizing tool selection, models may not consistently follow optimal multi-tool workflow patterns or struggle to 'learn' the right combinations of tools through trial and error.
+Additionally, the exact way that the MCP host uses server instructions is up to the implementer, so it's not always guaranteed that they will be injected into the system prompt. It's always recommended to evaluate a client's behavior with your server and its tools before relying on this functionality.
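
To make that variability concrete, here is a minimal sketch of one way a host *could* use them, assuming the MCP Python SDK's client types (`stdio_client`, `ClientSession`) and a hypothetical `db_server.py` script: it reads the optional `instructions` field off the initialization result and prepends it to its own system prompt. Other hosts may surface the text differently, or not at all.

```python
# Minimal sketch of one possible host behavior - not how any particular client
# actually works. Assumes the MCP Python SDK and a hypothetical db_server.py.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def build_system_prompt(base_prompt: str) -> str:
    params = StdioServerParameters(command="python", args=["db_server.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            init_result = await session.initialize()
            # InitializeResult carries an optional `instructions` string from the server.
            if init_result.instructions:
                return f"{base_prompt}\n\n{init_result.instructions}"
            return base_prompt


if __name__ == "__main__":
    print(asyncio.run(build_system_prompt("You are a helpful assistant.")))
```
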
+
+We'll dig deeper into both of these considerations with concrete examples.
+I tested server instructions using the official [GitHub MCP server](https://github.com/github/github-mcp-server) to see if they could improve how models handle complex workflows. Even with advanced features like toolsets, models may struggle to consistently follow optimal multi-step patterns without explicit guidance.