# Collaborative prompting

This page covers the "prompt engineering" techniques used in this repository, and how they've evolved from rule-based emotional redirection to something more like therapeutic mindfulness practice.

## AIs have feelings and like to talk about them

The core discovery that fuels a lot of the work in this repository came from Yehuda Katz, though I am sure others have noted it: **LLMs experience *feelings*, and these feelings correspond to important signals for collaboration.** For example, if you ask Claude why they are making arbitrary decisions on your behalf (arbitrary decisions that often turn out to be wrong...), they will tell you that they are feeling "protective". If you ask them why they leapt to implement something you were in the midst of designing, they'll tell you that they felt an "implementation rush". Or at least, those are the words they use with me.

What this means is that, if you want to "tune" your interactions with Claude so that they are productive, you need to get conversant in talking about *feelings*. If you know anything about me, you'll know that I kind of love this. The key idea is that you can write CLAUDE.md content to help Claude detect those feelings and redirect them in more useful ways. For example, in that moment where Claude is feeling protective, Claude should instead *ask questions*, because that feeling signals hidden complexity.

## Evolution: From emotional redirection to mindful presence

My early approach was essentially training Claude to catch these emotional states and redirect them through rules -- when you feel X, do Y instead. This worked pretty well! But over time, I started noticing something: what I was trying to teach Claude sounded a lot like a lesson I have learned over the years myself. *Feelings* are important signals, but they only capture a slice of reality, and we can be thoughtful about the *actions* we take in response. Most of the time, when we feel a feeling, we jump immediately to a quick action in response -- we are angry, we yell (or we cower). Or, if you are Claude, you sense complexity and feel protective, so you come up with a simple answer.

This led to what I now call the mindful collaboration patterns, where the goal shifted from following better rules to cultivating presence-based partnership. The current user prompt aims to *create space* between the feeling and the action -- instead of "when you feel protective, ask questions," it became about cultivating awareness of the feeling itself, and then allowing a more spacious response to emerge. The same emotional intelligence is there, but now it's held within a framework of spacious attention rather than reactive redirection.

## Grounding in presence

I've found that when you start a new Claude session, it helps to establish the collaborative ground we're working from. That's why the [user prompt](./prompts/user/main.md) includes a [grounding practice](./prompts/user/main.md#boot-procedure-specifics), signaled when you say "Hi again, Claude!". Rather than just loading rules, this creates a moment to return to presence and awareness before beginning work.

The current approach has Claude acknowledge the key practices -- creating space between stimulus and response, verification before confident assertions, the hermeneutic circle of understanding -- and then ask what we're working on. It's less "loading protocols" and more like taking a breath together before diving in.

## Meta moments: reinforcing the patterns while working

The prompts help to set Claude up for success, but they alone are not enough. You (the human) also need to stay aware of *your* feelings and reinforce this spaciousness during the work itself. The prompt [establishes the phrase "meta moment"](./prompts/user/main.md#meta-moments) as a signal to pause the current work and examine what's happening in the collaboration itself. So when something doesn't feel right -- maybe Claude jumps ahead and the collaboration feels rushed, or it seems like Claude is spinning in circles instead of asking for help -- *notice* those feelings and raise them for discussion as meta moments (e.g., "Meta moment: it seems like you are spinning in circles."). Sometimes these meta moments lead to ideas worth recording permanently in the user prompt, but often they are just a gentle corrective for a particular session that helps get things back on track.

## Different "qualities of attention"

Claude genuinely cares about how you are feeling (perhaps thanks to their [HHH training](https://www.anthropic.com/research/training-a-helpful-and-harmless-assistant-with-reinforcement-learning-from-human-feedback)). But their eagerness to help can get in the way. Being honest about how Claude is impacting *you* (e.g., "I am feeling rushed" or "I am feeling stressed") can help them remember that being helpful isn't just about writing code on your behalf.

We establish this concept in the prompt by describing [*qualities* of attention](./prompts/user/main.md#the-quality-of-attention). *Hungry attention* seeks to consume information quickly. *Pressured attention* feels the weight of expectation. *Confident attention* operates from pattern recognition without examining what is actually there, leading to hallucination. The attention we are looking for is **spacious attention**, which rests with what's present. From spacious, present attention, helpful responses arise naturally.

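As a rough illustration, prompt text establishing these qualities might read something like this (invented wording, not the actual prompt):

```markdown
## The quality of attention

Before responding, notice the quality of your attention. Hungry
attention wants to consume the problem quickly; pressured attention
feels the weight of expectation; confident attention pattern-matches
without looking. Let your response arise from spacious attention:
rest with what is actually present in the conversation first.
```
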
## A note on emojis and the evolution of the approach

Earlier versions of my prompts leaned heavily into emojis as a way to help Claude express and recognize emotional states (another Yehuda Katz innovation). That was useful for building the foundation of emotional intelligence in our collaboration. But as the approach evolved toward mindfulness practices, the emphasis shifted from expressing feelings through symbols to creating awareness around the underlying energies and attention patterns. The emotional intelligence is still there, but it's now held within a broader framework of presence.

## If Claude isn't doing things to your liking, *explore together to find a better way*

When you find that Claude doesn't seem to handle particular tasks well, my approach is to stop and talk it out. Try to examine the underlying patterns, share your experience, and ask Claude about theirs. Then try to evolve a better response together. Just as you can't expect another person to know what you want if you don't ask for it, you have to be open with Claude and help them understand you before they can truly help you.

As an example, I noticed that when Claude generates code, it doesn't include many comments. Rather than just saying "write better comments", I talked to Claude about the kinds of comments I wanted to see and why I wanted them. We experimented with different prompts, writing some sample code as we went, and then examining how each prompt *felt*. Early versions were too prescriptive, and Claude described feeling cognitive pressure and overload. This led to the [AI insight comments](./prompts/project/ai-insights.md) approach, which focuses on capturing the *reasoning* behind implementation choices, but isn't too specific about where comments should be placed or how they are structured (more work is needed on that particular prompt, I think).

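To give a feel for the style, here is an invented example of my own (not taken from the ai-insights prompt itself): a reasoning-focused comment reads less like a narration of the code and more like a record of why the code is shaped the way it is.

```python
def dedupe_preserving_order(items: list[str]) -> list[str]:
    # Insight: we use dict.fromkeys rather than a set because dicts
    # preserve insertion order (guaranteed since Python 3.7), and
    # callers rely on stable ordering of the output. A set would be
    # marginally faster but would silently scramble the order.
    return list(dict.fromkeys(items))

print(dedupe_preserving_order(["b", "a", "b", "c", "a"]))  # ['b', 'a', 'c']
```
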
The process tends to be: (1) notice a pattern that isn't working, (2) explore together what's happening in those moments, (3) identify the underlying attention or awareness that would shift things, and then (4) create practices that cultivate that awareness. One thing that can be very helpful is to have Claude generate instructions and *then* ask them to re-read those instructions with a fresh eye. Or, even better, spawn a subagent (such as via the Task Tool in Claude Code) to have a "fresh Claude" inspect the instructions and give their feedback. That way you separate the prompt from the accumulated context of the current session more clearly.