
Commit eaf9cb4 (1 parent: d59a94a)

instructions done

File tree

13 files changed: +362 −73 lines
Lines changed: 11 additions & 0 deletions

# Prompts

👨‍💼 Let's make it easier for users to get helpful suggestions in your journaling
app—without having to write their own prompts from scratch every time.

In this step, you'll add support for MCP's prompt capability to your server.
This will allow clients to discover and invoke reusable, parameterized
prompts—starting with a prompt that helps users get tag suggestions for a
specific journal entry.

By exposing prompts, you'll enable richer, more guided interactions for your
users, making common workflows (like tagging entries) just a click away.
Lines changed: 3 additions & 0 deletions

# Prompts

👨‍💼 You're doing awesome! Adding prompt support is a nice quality-of-life
improvement for your users.

exercises/05.prompts/FINISHED.mdx

Lines changed: 3 additions & 0 deletions

# Prompts

👨‍💼 Great work. I'll bet you can already think of a few more prompts to add to
the MCP servers you've been dreaming up, huh?!

exercises/05.prompts/README.mdx

Lines changed: 39 additions & 0 deletions

# Prompts

Sometimes there are common workflows for people using your MCP server that you
want to make easier. You may not want your users to have to write the same
prompt from scratch every time for that workflow (not everyone is a "prompt
engineer").

The Model Context Protocol (MCP) has a specification for **prompts** as a way
for servers to expose reusable, structured instructions to language models and
clients. Prompts are more than just static text—they can be parameterized and
invoked by users to guide model behavior in a consistent, transparent way.

With prompts, servers can offer a menu of available instructions (like
"summarize my journal entries from last week," "write alt text for this image,"
or "review this code"), each with a clear description and customizable
arguments. This enables richer, more user-driven interactions, where clients
can select and fill in prompts as needed.

For example, a server might expose a simple "hello world" prompt:

```ts
server.prompt('hello_world', 'Say hello to the user', {}, async () => ({
	messages: [
		{
			role: 'user',
			content: { type: 'text', text: 'Hello, world!' },
		},
	],
}))
```

Clients can discover available prompts, retrieve their templates, and supply
arguments to customize the resulting messages. Prompts can include not just
text, but also images, audio, or references to server-managed resources—enabling
multi-modal and context-rich interactions.
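To make that concrete, here's a sketch of a parameterized prompt using the
TypeScript SDK's `server.prompt` API with a Zod argument schema (the
`suggest_tags` name and argument shape here are illustrative, not this
exercise's solution):

```ts
import { z } from 'zod'

server.prompt(
	'suggest_tags',
	'Suggest tags for a specific journal entry',
	// Prompt arguments are declared as a Zod shape; clients can render these
	// as fill-in fields when the user invokes the prompt.
	{ entryId: z.string().describe('The ID of the journal entry to tag') },
	async ({ entryId }) => ({
		messages: [
			{
				role: 'user',
				content: {
					type: 'text',
					text: `Please suggest some tags for my journal entry with ID "${entryId}".`,
				},
			},
		],
	}),
)
```

When a client lists prompts, it sees `suggest_tags` along with its description
and its `entryId` argument, and can supply a value before sending the resulting
message to the model.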
This exercise will introduce you to MCP's prompt capabilities, showing how to
declare prompt support, register prompts with arguments, and return structured
messages for downstream use.
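If you're curious what declaring prompt support looks like, here's a minimal
sketch, assuming the SDK's `McpServer` constructor options (the high-level
`server.prompt()` API can also register this capability for you; the server
name below is made up):

```ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'

// Advertise the prompts capability during initialization so clients know they
// can call prompts/list and prompts/get. listChanged signals that the server
// may notify clients when the prompt list changes.
const server = new McpServer(
	{ name: 'epicme', version: '1.0.0' },
	{ capabilities: { prompts: { listChanged: true } } },
)
```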
- 📜 [MCP Prompts Specification](https://modelcontextprotocol.io/specification/2025-03-26/server/prompts)
Lines changed: 33 additions & 0 deletions

# Simple Sampling

👨‍💼 Our users love the prompt functionality, but they asked why the LLM couldn't
just suggest tags when they create an entry, and we thought that's a great idea!

So now your goal is to make your server request a simple completion from the
language model whenever a new journal entry is created.

In this first step, we'll just get things wired up, and then we'll work on our
prompt for the LLM in the next step.

**Here's what you'll do** (see the sketch after this list):

- Implement a function that sends a sampling request to the client using
  `agent.server.server.createMessage` (the `server.server` thing is funny, but
  our MCP server manages an internal server, and that's what we're accessing).
- Use a simple system prompt (e.g., "You are a helpful assistant.") and a user
  message that references the new journal entry's ID (we'll enhance this next).
- Set a reasonable `maxTokens` value for the response.
- Parse the model's response using a provided Zod schema.
- Log the result to the console so you can inspect the model's output.
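Here's a minimal sketch of what that wiring might look like (the `EpicMeMCP`
type comes from this exercise's starter code, its import path and the response
schema shape below are assumptions for illustration):

```ts
import { z } from 'zod'
// 🚨 Assumed import path: adjust to wherever EpicMeMCP lives in your starter.
import { type EpicMeMCP } from './index.ts'

// The shape we expect back from createMessage (an assumption; use the schema
// the workshop provides).
const resultSchema = z.object({
	content: z.object({ type: z.literal('text'), text: z.string() }),
})

export async function suggestTagsSampling(agent: EpicMeMCP, entryId: number) {
	// createMessage sends a sampling/createMessage request to the client,
	// which asks the user's LLM for a completion on our behalf.
	const result = await agent.server.server.createMessage({
		systemPrompt: 'You are a helpful assistant.',
		messages: [
			{
				role: 'user',
				content: {
					type: 'text',
					text: `Please say something nice about journal entry ${entryId}.`,
				},
			},
		],
		maxTokens: 100,
	})
	console.error(resultSchema.parse(result))
}
```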
And don't forget to call it when the user creates a new journal entry!

The `maxTokens` option specifies the maximum number of tokens the model should
return.

<callout-warning>
	The system prompt + messages + output message can't exceed the context
	window size of the model your user is using.
</callout-warning>

This step will help you get comfortable with the basic request/response flow
for sampling in MCP, and set the stage for more advanced prompt engineering in
the next step.
Lines changed: 9 additions & 0 deletions

# Simple Sampling

👨‍💼 Great! Now we're wired up to ask the LLM to do stuff for us. Let's get to
that next.

🧝‍♂️ I'm going to get things implemented for actually creating and applying the
suggested tags, but you're going to have to do the work of asking the LLM to
suggest them!

Feel free to <NextDiffLink>check out my changes if you're curious</NextDiffLink>.
Lines changed: 92 additions & 0 deletions

# Advanced Sampling Prompt

👨‍💼 Now that you've wired up basic sampling, it's time to make your prompt have
the LLM do something actually useful! Instead of just asking for a generic
response, you'll craft a prompt that enables the model to suggest relevant tags
for a new journal entry.

**Here's what you'll do** (see the sketch after this list):

- Update your sampling request to provide the LLM with structured information:
  the new journal entry, its current tags, and all existing tags in the system.
- Change the user message to send this data as JSON (`application/json`), not
  plain text.
- Write a clear, detailed system prompt that instructs the LLM to make a list
  of suggested tags that are relevant to the entry and not already applied.
  Make certain it's instructed on the format of the response.
- Increase the `maxTokens` to allow for a longer, more detailed response.
- Test and iterate on your prompt! Try pasting the example JSON into your
  favorite LLM playground and see how it responds. Refine your instructions
  until you get the output you want.
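Here's a rough sketch of the updated request. The exact system prompt wording
is yours to iterate on; attaching `mimeType: 'application/json'` to the text
content and the expected response format are assumptions based on this
exercise's notes:

```ts
// entry, currentTags, and existingTags are assumed to be loaded from your
// database already.
const result = await agent.server.server.createMessage({
	// Instruct the model on both the task and the exact output format.
	systemPrompt: `
You are a tagging assistant for a journaling app. Given a journal entry, its
current tags, and all existing tags, respond with ONLY a JSON array of
suggested tags that are relevant to the entry and not already applied.
Reference existing tags as {"id": number}; propose new tags as
{"name": string, "description": string}.
	`.trim(),
	messages: [
		{
			role: 'user',
			content: {
				type: 'text',
				mimeType: 'application/json',
				text: JSON.stringify({ entry, currentTags, existingTags }),
			},
		},
	],
	// More room for a longer, structured response.
	maxTokens: 1000,
})
```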
<details>

<summary>Development workflow</summary>

🦉 You can use the JSON below to test your prompt:

1. Write your prompt into the LLM chat
2. Let it respond (it'll probably ask you to provide the JSON)
3. Paste the JSON below into the chat and let it respond again
4. Evaluate the response (make sure it's in the right format)
5. Repeat in new chats until you're happy with the prompt/response

```json
{
	"entry": {
		"id": 6,
		"title": "Day at the Beach with Family",
		"content": "Spent the whole day at the beach with the family and it couldn't have been better. The kids were totally absorbed in building a massive sandcastle—complete with towers, moats, and even a seaweed flag. We played catch, flew a kite, and waded into the water until our fingers turned into prunes. Rebecca and I went on a shell hunt and found a few keepers. Lunch was sandy PB&Js and watermelon under a big striped umbrella. We stayed until sunset, which painted the sky with ridiculous pinks and oranges. Everyone was sun-tired and happy. Grateful for days like this.",
		"mood": "grateful",
		"location": "beach",
		"weather": "sunny",
		"isPrivate": 0,
		"isFavorite": 1,
		"createdAt": 1746668878,
		"updatedAt": 1746668878,
		"tags": [{ "id": 1, "name": "Family" }]
	},
	"currentTags": [
		{
			"id": 1,
			"name": "Family",
			"description": "Spending time with family members",
			"createdAt": 1746666966,
			"updatedAt": 1746666966
		}
	],
	"existingTags": [
		{
			"id": 1,
			"name": "Family",
			"description": "Spending time with family members",
			"createdAt": 1746666966,
			"updatedAt": 1746666966
		},
		{
			"id": 2,
			"name": "Outdoors",
			"description": "Entries about being outside in nature or open spaces",
			"createdAt": 1746667900,
			"updatedAt": 1746667900
		},
		{
			"id": 3,
			"name": "Exercise",
			"description": "Physical activity or movement",
			"createdAt": 1746668000,
			"updatedAt": 1746668000
		},
		{
			"id": 4,
			"name": "Food",
			"description": "Eating, meals, or anything food-related",
			"createdAt": 1746668001,
			"updatedAt": 1746668001
		}
	]
}
```

</details>
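Since the model is expected to respond with a JSON array that mixes
existing-tag references and new-tag objects, a schema along these lines could
validate the parsed output (a sketch; the workshop's starter code provides its
own schema):

```ts
import { z } from 'zod'

// Existing tags come back as an id reference; new tags carry a name and a
// description so the server can create them before applying.
const suggestedTagsSchema = z.array(
	z.union([
		z.object({ id: z.number() }),
		z.object({ name: z.string(), description: z.string() }),
	]),
)
```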
This step will help you practice prompt engineering for structured outputs, and
show you how to use the full power of MCP's sampling API for real-world tasks.

exercises/06.sampling/02.problem.advanced/src/sampling.ts

Lines changed: 1 addition & 65 deletions

```diff
@@ -14,8 +14,7 @@ export async function suggestTagsSampling(agent: EpicMeMCP, entryId: number) {
 	// we're going to pass to it.
 	// 🦉 You can develop this by chatting with an LLM yourself. Write out a
 	// prompt, give it to the LLM, and then paste some example JSON in and see
-	// whether the LLM responds as you expect. Check the bottom of the file for
-	// an example of the JSON you can use to test your prompt.
+	// whether the LLM responds as you expect.
 	// 🐨 Note: we're expecting the LLM to respond with a JSON array of tag objects.
 	// Existing tags have an "id" property, new tags have a "name" and "description" property.
 	// So make sure you prompt it to respond correctly
@@ -103,66 +102,3 @@ Please respond with a proper commendation for yourself.
 	}
 	console.error('Added tags to entry', entry.id, idsToAdd)
 }
-
-// 🦉 You can use this JSON to test your prompt:
-// 1. Write your prompt into the LLM chat
-// 2. Let it respond (It'll probably ask you to provide the JSON)
-// 3. Paste the JSON below into the chat and let it respond again
-// 4. Evaluate the response (make sure it's in the right format)
-// 5. Repeat in new chats until you're happy with the prompt/response
-/*
-{
-  "entry": {
-    "id": 6,
-    "title": "Day at the Beach with Family",
-    "content": "Spent the whole day at the beach with the family and it couldn't have been better. The kids were totally absorbed in building a massive sandcastle—complete with towers, moats, and even a seaweed flag. We played catch, flew a kite, and waded into the water until our fingers turned into prunes. Rebecca and I went on a shell hunt and found a few keepers. Lunch was sandy PB&Js and watermelon under a big striped umbrella. We stayed until sunset, which painted the sky with ridiculous pinks and oranges. Everyone was sun-tired and happy. Grateful for days like this.",
-    "mood": "grateful",
-    "location": "beach",
-    "weather": "sunny",
-    "isPrivate": 0,
-    "isFavorite": 1,
-    "createdAt": 1746668878,
-    "updatedAt": 1746668878,
-    "tags": [{"id": 1, "name": "Family"}]
-  },
-  "currentTags": [
-    {
-      "id": 1,
-      "name": "Family",
-      "description": "Spending time with family members",
-      "createdAt": 1746666966,
-      "updatedAt": 1746666966
-    }
-  ],
-  "existingTags": [
-    {
-      "id": 1,
-      "name": "Family",
-      "description": "Spending time with family members",
-      "createdAt": 1746666966,
-      "updatedAt": 1746666966
-    },
-    {
-      "id": 2,
-      "name": "Outdoors",
-      "description": "Entries about being outside in nature or open spaces",
-      "createdAt": 1746667900,
-      "updatedAt": 1746667900
-    },
-    {
-      "id": 3,
-      "name": "Exercise",
-      "description": "Physical activity or movement",
-      "createdAt": 1746668000,
-      "updatedAt": 1746668000
-    },
-    {
-      "id": 4,
-      "name": "Food",
-      "description": "Eating, meals, or anything food-related",
-      "createdAt": 1746668001,
-      "updatedAt": 1746668001
-    }
-  ]
-}
-*/
```
Lines changed: 6 additions & 0 deletions

# Advanced Sampling Prompt

👨‍💼 Hooray! Now our users can get tag suggestions from the LLM when they create
a new journal entry and have those tags actually be created and applied! Cool!

This is a great example of how you can use the power of MCP to automate tasks in
your application.

exercises/06.sampling/FINISHED.mdx

Lines changed: 3 additions & 0 deletions

# Sampling

👨‍💼 Great! Our users are happy with the sampling functionality you've
implemented. It's really turning their assistant into an actual assistant!
