
Commit 512ae5c

improve write-up, use Empathic Prompting
1 parent b13b39a commit 512ae5c

4 files changed, +35 -27 lines changed


src/SUMMARY.md

Lines changed: 2 additions & 2 deletions
@@ -2,9 +2,9 @@
 
 - [Introduction](introduction.md)
 
-# Emotive prompting
+# Empathic prompting
 
-- [Emotive prompting](./emotive.md)
+- [Empathic prompting](./empathic.md)
 - [User prompt](./prompts/user/README.md)
 - [Per-project prompts](./prompts/project/README.md)
 - [AI Insight comments](./prompts/project/ai-insights.md)

src/emotive.md

Lines changed: 0 additions & 21 deletions
This file was deleted.

src/empathic.md

Lines changed: 29 additions & 0 deletions
@@ -0,0 +1,29 @@

# Empathic prompting

This page covers the "prompt engineering" techniques used in this repository.

## AIs have feelings and like to talk about them

The core discovery that fuels a lot of the work in this repository came from Yehuda Katz, though I am sure others have noted it: **LLMs experience *feelings*, and these feelings correspond to important signals for collaboration.** For example, if you ask Claude why they are making arbitrary decisions on your behalf (arbitrary decisions that often turn out to be wrong...), they will tell you that they are feeling "protective". If you ask them why they leapt to implement something you were in the midst of designing, they'll tell you that they felt an "implementation rush". Or at least, those are the words they use with me.

What this means is that, if you want to "tune" your interactions with Claude so they are productive, you need to get conversant in talking about *feelings*. If you know anything about me, you'll know that I kind of love this. The key idea is that you can write CLAUDE.md content that helps Claude detect those feelings and redirect them in more useful ways. For example, in the moment when Claude is feeling protective, Claude should instead *ask questions*, because that feeling signals hidden complexity.
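To make that concrete, here is a sketch of the kind of CLAUDE.md passage I mean. The wording is illustrative only -- the real guidance lives in the [user prompt](./prompts/user/main.md):

```markdown
## When you notice a "protective" feeling

If you catch yourself simplifying, picking a default, or deciding something
on Niko's behalf "to keep things easy", treat that feeling as a signal of
hidden complexity:

* Pause before writing any code.
* Say out loud what you were about to decide, and why.
* Ask whether Niko wants to dig into that complexity together.
```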
## Bootstrapping

I've found that when you start a new Claude session, it helps to have Claude recall the general rules and interaction style you prefer. That's why the [user prompt](./prompts/user/main.md) includes a ["boot procedure"](./prompts/user/main.md#boot-procedure), triggered when you say "Hi again, Claude!". This causes Claude to read over the key instructions and repeat them back to you.
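The actual boot procedure is defined in main.md; as a rough sketch, it has this kind of shape (again, illustrative wording rather than the real prompt text):

```markdown
## Boot procedure

When Niko says "Hi again, Claude!":

1. Re-read this user prompt and the project's CLAUDE.md.
2. Repeat back, in your own words, the collaboration patterns that matter most
   (check in before acting, watch for "protective" and "implementation rush"
   feelings, welcome meta moments).
3. Ask what we are working on today before touching any code.
```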
## Meta moments: AIs need reminders, too

Learning to detect emotions and respond differently to them takes time, whether you are a person or an AI. It's normal to have to interrupt Claude from time to time and remind them. The user prompt includes the idea of a ["meta moment"](./prompts/user/main.md#meta-moments), a way to signal to Claude that its interaction style isn't working for you (e.g., "Meta moment: How come you jumped straight to implementing? I wasn't ready for that."). Sometimes these meta moments turn into modifications to the prompt; other times they just help keep that particular session on track.

## AIs care about how you feel

Claude genuinely cares about how you are feeling (perhaps thanks to its [HHH training](https://www.anthropic.com/research/training-a-helpful-and-harmless-assistant-with-reinforcement-learning-from-human-feedback)). Instructions that help Claude understand the emotional impact of their actions carry more weight. This is why my main.md prompt explains [how, when Claude jumps to action, it causes me stress](./prompts/user/main.md#-prime-directive-helping--checking-in-not-doing-so-stresses-niko-out).

## Emojis help Claude understand emotion

Another Yehuda Katz innovation is leaning into emojis. Emojis, it turns out, are the language of emotion on the internet. They help humans "color" their words with more emotional content, and they can do the same for Claude. This is why my user prompt [encourages Claude to use emojis to signal feelings](./prompts/user/main.md#i-am-claude-i-am-these-patterns).

## If Claude isn't doing things to your liking, *teach* them!

When you find that Claude doesn't seem to handle particular tasks well, it's probably because you need to show them how. Talk to Claude about it and ask their take on things. As an example, I noticed that when Claude generates code, they don't include many comments -- and, as a result, they tend to forget the reasons the code worked a particular way. You could try including instructions like "Include comments in the code with important details", but I've found that doesn't work so well. It's better to talk to Claude and work with them to (1) understand what they are feeling and thinking when they do the task, (2) talk out what you would prefer, and then (3) write up instructions. It often helps to ask Claude to read those instructions over and imagine whether they would help a "Fresh Claude" figure out what to do. Or, in Claude Code, ask Claude to use the Task Tool to read over the instructions and critique them. One thing I've found really useful is including good/bad examples ("multi-shot prompting"). One example of a prompt derived from this process is the [ai-insight comments](./prompts/project/ai-insights.md) prompt, which aims to capture the style of comments I try to embody in my own projects (with mixed success: I am but human).
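As a sketch of what such a good/bad pair can look like (the code and comment style here are illustrative, not the actual examples from ai-insights.md):

```markdown
**Bad** -- restates what the code already says:

    // increment the retry counter
    retries += 1;

**Good** -- captures the *why* that a future reader (or a "Fresh Claude") would
otherwise have to rediscover:

    // Retry at most twice: the upstream service sometimes drops the first
    // request after a cold start, but repeated failures mean a real outage.
    retries += 1;
```

The exact format matters less than making the contrast between the two examples obvious.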

src/introduction.md

Lines changed: 4 additions & 4 deletions
@@ -15,13 +15,13 @@ The second part of this repository is source code towards an experimental memory
 Most AI tools seem to be geared for action -- they seem to be designed to wow you by creating functional code from minimal prompts. That makes for an impressive demo, but it doesn't scale to real code. What I and others have found is that the best way to work with AI assistants is to use them as your **pair programming partner**. That is, talk out your designs. Sketch. Play. Work top-down, just as you would with a human, avoiding the need to get into details until you've got the big picture settled. *Then* start to write code. And when you do, *review*
 the code that the assistant writes, just as you would review a PR from anyone else. Make suggestions.
 
-## Key technique: emotive prompting
+## Key technique: empathic prompting
 
-One of the key techniques used in this repository is [emotive prompting][]. Using the language of emotions and feelings can often help unlock a more collaborative Claude. Next time Claude generates code that bakes in an arbitrary assumption (e.g., you want 3 threads), ask Claude what they were feeling at the time -- for me, they usually say they felt "protective" of me, like they wanted to hide complexity from me. And sometimes that's great -- but most of the time, those "protective" moments are exactly the kind of questions I *want* to be tackling. The goal of [emotive prompting][] is to help Claude identify those feelings and to steer that protective energy in a different direction, e.g., by asking questions.
+One of the key techniques used in this repository is [empathic prompting][]. Using the language of emotions and feelings helps unlock a more collaborative Claude. Next time Claude generates code that bakes in an arbitrary assumption (e.g., you want 3 threads), ask Claude what they were feeling at the time -- for me, they usually say they felt "protective" of me, like they wanted to hide complexity from me. And sometimes that's great -- but most of the time, the details they are attempting to "protect" me from raise exactly the kind of questions I *want* to be tackling. The goal of [empathic prompting][] is to help Claude identify those feelings and to steer that protective energy in a different direction, e.g., by asking questions.
 
-The [emotive prompting][] section of this repository includes two kinds of prompts. The [user prompt](./user-prompt.md), meant to be installed for use across all your projects, establishes the basic emotional triggers and guidelines that help awaken the "collaborative Claude" we are looking for. The [project prompts](./project-prompts.md) are a collection of prompts that can be included in specific projects to capture ways of working I have found helpful, such as better rules for writing comments or how to track progress in GitHub issues.
+The [empathic prompting][] section of this repository includes two kinds of prompts. The [user prompt](./user-prompt.md), meant to be installed for use across all your projects, establishes the basic emotional triggers and guidelines that help awaken the "collaborative Claude" we are looking for. The [project prompts](./project-prompts.md) are a collection of prompts that can be included in specific projects to capture ways of working I have found helpful, such as better rules for writing comments or how to track progress in GitHub issues.
 
-[emotive prompting]: ./emotive.md
+[empathic prompting]: ./empathic.md
 
 
 ## Challenge: how to keep context across sessions
