# Advanced Sampling Prompt

👨‍💼 Now that you've wired up basic sampling, it's time to have the LLM do
something actually useful! Instead of asking for a generic response, you'll
craft a prompt that enables the model to suggest relevant tags for a new
journal entry.

**Here's what you'll do:**

- Update your sampling request to give the LLM structured information: the new
  journal entry, its current tags, and all existing tags in the system.
- Change the user message to send this data as JSON (`application/json`) rather
  than plain text.
- Write a clear, detailed system prompt instructing the LLM to produce a list
  of suggested tags that are relevant to the entry and not already applied. Be
  sure to specify the exact response format you expect.
- Increase `maxTokens` to allow for a longer, more detailed response.
- Test and iterate on your prompt! Try pasting the example JSON into your
  favorite LLM playground and see how it responds. Refine your instructions
  until you get the output you want.
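To make the steps above concrete, here's a minimal sketch of what the updated request parameters might look like. This is an illustration, not the exercise solution: the prompt wording, the trimmed stand-in data, the `maxTokens` value, and the `mimeType` placement are all assumptions based on the instructions above.

```typescript
// Trimmed stand-ins for the real entry/tag data (assumed shapes)
const entry = {
	id: 6,
	title: 'Day at the Beach with Family',
	tags: [{ id: 1, name: 'Family' }],
}
const currentTags = [{ id: 1, name: 'Family' }]
const existingTags = [
	{ id: 1, name: 'Family' },
	{ id: 2, name: 'Outdoors' },
	{ id: 4, name: 'Food' },
]

// A sketch of the sampling request parameters
const requestParams = {
	systemPrompt: [
		'You suggest tags for journal entries.',
		'Given an entry, its current tags, and all existing tags,',
		'respond with ONLY a JSON array of tag name strings that are',
		'relevant to the entry and not already applied. No prose.',
	].join(' '),
	messages: [
		{
			role: 'user' as const,
			content: {
				type: 'text' as const,
				// Send the structured data as JSON rather than prose
				text: JSON.stringify({ entry, currentTags, existingTags }),
				mimeType: 'application/json',
			},
		},
	],
	// Bumped up from a small default so the model has room to respond
	maxTokens: 1000,
}
```

You'd pass something shaped like this to your server's sampling call; the important parts are the structured JSON context, the explicit response format in the system prompt, and the larger `maxTokens`.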

<details>
<summary>Development workflow</summary>

🦉 You can use the JSON below to test your prompt:

1. Write your prompt into the LLM chat
2. Let it respond (it'll probably ask you to provide the JSON)
3. Paste the JSON below into the chat and let it respond again
4. Evaluate the response (make sure it's in the right format)
5. Repeat in new chats until you're happy with the prompt and its responses

```json
{
	"entry": {
		"id": 6,
		"title": "Day at the Beach with Family",
		"content": "Spent the whole day at the beach with the family and it couldn't have been better. The kids were totally absorbed in building a massive sandcastle—complete with towers, moats, and even a seaweed flag. We played catch, flew a kite, and waded into the water until our fingers turned into prunes. Rebecca and I went on a shell hunt and found a few keepers. Lunch was sandy PB&Js and watermelon under a big striped umbrella. We stayed until sunset, which painted the sky with ridiculous pinks and oranges. Everyone was sun-tired and happy. Grateful for days like this.",
		"mood": "grateful",
		"location": "beach",
		"weather": "sunny",
		"isPrivate": 0,
		"isFavorite": 1,
		"createdAt": 1746668878,
		"updatedAt": 1746668878,
		"tags": [{ "id": 1, "name": "Family" }]
	},
	"currentTags": [
		{
			"id": 1,
			"name": "Family",
			"description": "Spending time with family members",
			"createdAt": 1746666966,
			"updatedAt": 1746666966
		}
	],
	"existingTags": [
		{
			"id": 1,
			"name": "Family",
			"description": "Spending time with family members",
			"createdAt": 1746666966,
			"updatedAt": 1746666966
		},
		{
			"id": 2,
			"name": "Outdoors",
			"description": "Entries about being outside in nature or open spaces",
			"createdAt": 1746667900,
			"updatedAt": 1746667900
		},
		{
			"id": 3,
			"name": "Exercise",
			"description": "Physical activity or movement",
			"createdAt": 1746668000,
			"updatedAt": 1746668000
		},
		{
			"id": 4,
			"name": "Food",
			"description": "Eating, meals, or anything food-related",
			"createdAt": 1746668001,
			"updatedAt": 1746668001
		}
	]
}
```

</details>
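However strict your instructions, it's worth validating the model's reply before using it. Here's a hedged sketch of one way to do that; the helper name `parseSuggestedTags` and its filtering rules are assumptions for illustration, not part of the exercise:

```typescript
// Hypothetical helper: parse the model's reply (expected to be a JSON array
// of tag name strings) and keep only names that exist and aren't applied yet.
function parseSuggestedTags(
	responseText: string,
	existingTagNames: Array<string>,
	appliedTagNames: Array<string>,
): Array<string> {
	const parsed: unknown = JSON.parse(responseText)
	if (!Array.isArray(parsed) || !parsed.every((t) => typeof t === 'string')) {
		throw new Error('Expected a JSON array of tag name strings')
	}
	// Filter out hallucinated tags and tags the entry already has
	return parsed.filter(
		(name) =>
			existingTagNames.includes(name) && !appliedTagNames.includes(name),
	)
}
```

For example, given the reply `'["Outdoors","Food","Family"]'` with existing tags `['Family', 'Outdoors', 'Exercise', 'Food']` and applied tags `['Family']`, this returns `['Outdoors', 'Food']`.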

This step will help you practice prompt engineering for structured outputs, and
show you how to use the full power of MCP's sampling API for real-world tasks.