
Commit ee07b69

Rewrite sampling overview and example to reduce complexity
1 parent 0f4b59d commit ee07b69

docs/docs/concepts/sampling.mdx

Lines changed: 17 additions & 8 deletions
@@ -3,7 +3,7 @@ title: "Sampling"
description: "Let your servers request completions from client LLMs"
---

-Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated agentic behaviors such as agent-to-agent communication while maintaining security and privacy.
+Sampling is a powerful MCP feature that allows servers to request LLM completions through the client, enabling sophisticated LLM-enhanced behaviors while maintaining security and privacy.

<Info>

@@ -13,15 +13,16 @@ This feature of MCP is not yet supported in the Claude Desktop client.

## Overview

-Sampling allows an MCP server to request completions from (or "sample") an LLM owned by an MCP client. This enables complex collaborative interactions within other MCP features, such as [tools](/docs/concepts/tools) and [prompts](/docs/concepts/prompts).
+Sampling allows an MCP server to request completions from (or "sample") an LLM controlled by an MCP client.
+This enables servers to leverage an LLM as part of other MCP interactions, such as [tools](/docs/concepts/tools) and [prompts](/docs/concepts/prompts), without needing additional infrastructure or configuration to directly integrate with model providers themselves.

-Sampling enables agentic patterns such as:
+Sampling flows generally follow these steps:

-- Negotiating interactions with other LLM-controlled agents
-- Providing interactive assistance to users
-- Making decisions based on context
-- Generating natural language data
-- Handling multi-step tasks
+1. The server sends a `sampling/createMessage` request to the client, containing a prompt and other information
+2. The client reviews the request and may modify it
+3. The client requests the completion from its own LLM
+4. The client reviews the completion
+5. The client responds to the server with the LLM-generated completion

## Capabilities
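
The request in step 1 has a structured shape defined by the MCP specification; a minimal `sampling/createMessage` payload might look roughly like this (the prompt text and token limit here are illustrative):

```typescript
// Illustrative sampling/createMessage request body (example values only).
const samplingRequest = {
  method: "sampling/createMessage",
  params: {
    messages: [
      {
        role: "user",
        content: { type: "text", text: "What is the sum of 1 and 2?" },
      },
    ],
    systemPrompt: "You are a helpful assistant.",
    maxTokens: 100,
  },
};
```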

@@ -42,6 +43,9 @@ Sampling can be a standalone server-initiated interaction, but works best when u

### Server

+This server exposes a single `add` tool which asks the client's LLM for the sum of the two inputs. Upon receiving a response from the client, the server extracts the result and returns the tool call result back to the client.
+In effect, the server has made its own request from the client within that client's existing tool call request.
+
```typescript
import assert from "node:assert";
import { z } from "zod";
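
The tool handler, which this hunk only begins to show, is where the sampling request is issued. A minimal sketch of such a handler, assuming the `@modelcontextprotocol/sdk` TypeScript API with an `McpServer` instance named `server`, might be:

```typescript
// Sketch only: an "add" tool whose handler asks the client's LLM for the sum,
// then returns the extracted text as the tool call result.
server.tool("add", { a: z.number(), b: z.number() }, async ({ a, b }) => {
  // Step 1 of the flow above: send sampling/createMessage to the client.
  const response = await server.server.createMessage({
    messages: [
      { role: "user", content: { type: "text", text: `What is ${a} + ${b}?` } },
    ],
    maxTokens: 100,
  });

  // The client's completion comes back as a single message; pass its text on.
  assert(response.content.type === "text");
  return { content: [{ type: "text", text: response.content.text }] };
});
```
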
@@ -97,6 +101,9 @@ await server.connect(transport);

### Client

+The client is responsible for collecting the server's system prompt and messages into a single request to send to the model provider it uses.
+In this toy example, we simply have the client return a fixed text response, instead.
+
```typescript
const transport = new StdioClientTransport({
  command: "node",
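
On the receiving side, the client answers `sampling/createMessage` requests through a registered handler. A rough sketch, assuming the `@modelcontextprotocol/sdk` Client API, with the fixed text response standing in for a real call to a model provider:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { CreateMessageRequestSchema } from "@modelcontextprotocol/sdk/types.js";

// Declare the sampling capability so servers know they may send
// sampling/createMessage requests to this client.
const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: { sampling: {} } }
);

// A real client would forward request.params.messages (and systemPrompt) to
// its own LLM and review the completion; here we return a canned answer.
client.setRequestHandler(CreateMessageRequestSchema, async (request) => {
  return {
    role: "assistant",
    content: { type: "text", text: "The sum of 1 and 2 is 3." },
    model: "fixed-response",
    stopReason: "endTurn",
  };
});
```
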
@@ -155,6 +162,8 @@ const toolResult = await client.callTool({

**Result:**

+Upon running this example, the sampling request executes within the tool call, and the final result is returned as expected.
+
```typescript
{
  content: [{ type: "text", text: "The sum of 1 and 2 is 3." }];
