Understand what Claude Code actually does. Improve your agentic workflows.
Hooks that let you see inside Claude Code's multi-agent system:
- Track every prompt - What you asked, what subagents received
- See skill usage - Which skills fired, which agent used them, did they help?
- Understand delegation - When Claude spawns subagents, what prompts do they get?
- Catch drift - Where did Claude go off track? What context was missing?
- Safety guardrails - Prevent accidental damage while you experiment
| You're a... | What you get |
|---|---|
| Solo dev | Review your sessions, debug skills, improve prompts |
| Team | Logs separated by user, share learnings, async handoffs |
Logs organized by `{user}/{branch}/{date}/` - no collisions, easy to find.
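For example, today's logs for your own session resolve to a predictable path. A sketch, assuming `{user}` and `{branch}` come from your username and current git branch (the hooks' exact derivation may differ):

```bash
# Hypothetical path lookup - adjust if your setup derives user/branch differently.
ls .claude/logs/$(whoami)/$(git branch --show-current)/$(date +%Y-%m-%d)/
```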
When a skill is invoked, these hooks capture:

```json
{
  "timestamp": "2026-01-26T10:30:00Z",
  "skill": "docker",
  "event": "start",
  "invoked_by": "main",  // or "Explore", "Plan", etc.
  "session_id": "abc123"
}
```

On completion:

```json
{
  "timestamp": "2026-01-26T10:30:05Z",
  "skill": "docker",
  "event": "end",
  "status": "success",
  "invoked_by": "main",
  "output_size": 2048
}
```

Key insight: the `invoked_by` field tells you whether the main agent or a subagent used the skill, so you can verify skills fire where you expect them to.
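Since `invoked_by` is recorded on every event, a one-line jq filter surfaces skill invocations that came from subagents rather than the main agent, using the schema shown above:

```bash
# Which skill invocations came from subagents?
cat .claude/logs/*/*/*/skills.jsonl | jq 'select(.invoked_by != "main") | {skill, invoked_by, event}'
```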
When Claude spawns a subagent (Explore, Plan, Bash, etc.), we capture:

```json
{
  "timestamp": "2026-01-26T10:31:00Z",
  "event": "start",
  "agent_id": "pending-1706234567-1234",
  "subagent_type": "Explore",
  "model": "haiku",
  "run_in_background": false,
  "prompt_summary": "Find all files related to authentication...",
  "prompt_length": 847
}
```

Full prompts are stored separately in `.claude/context/subagents/{agent_id}.prompt`, so you can see exactly what context the subagent received.
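Because each task event carries the `agent_id` that names its stored prompt file, you can jump straight from an event to the full prompt. A minimal sketch, assuming the paths described above:

```bash
# Find the most recent subagent spawn, then read the full prompt it received.
agent_id=$(cat .claude/logs/*/*/*/tasks.jsonl | jq -r 'select(.event=="start") | .agent_id' | tail -1)
cat ".claude/context/subagents/${agent_id}.prompt"
```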
Claude Code is a multi-agent system. When you give it a task:
- It might spawn subagents (Explore, Plan, Bash, etc.)
- Each subagent gets a different prompt
- Skills may or may not activate
- Things happen you can't see
Without observability, you're flying blind:
- "Why didn't my skill activate?"
- "What prompt did the subagent actually get?"
- "Where did Claude lose context?"
- "Is my skill even being used correctly?"
This setup logs everything to `.claude/logs/`:

```
.claude/logs/{user}/{branch}/{date}/
├── prompts.jsonl   # Every prompt you sent
├── tasks.jsonl     # Subagent delegations (with FULL prompts)
├── skills.jsonl    # Skill invocations + which agent used them
├── errors.jsonl    # What failed and why
├── reads.jsonl     # Files Claude read
└── audit.jsonl     # Unified timeline
```
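A quick way to confirm the hooks are actually writing, using the layout above:

```bash
# Count entries per log stream across all users/branches/dates.
find .claude/logs -name '*.jsonl' -exec wc -l {} +
```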
Now you can:
- Debug skill activation - See if skills matched, who invoked them
- Review subagent prompts - Understand what context was passed
- Find drift points - Where Claude went off track
- Improve your prompts - Learn what works
```bash
# Option A: Using make
make install DEST=your-project/

# Option B: Manual copy
cp -r .claude your-project/
```

Logs appear automatically in `.claude/logs/`.
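After installing, you can confirm the hooks registered by inspecting the settings file. A sketch, assuming hook config lives under a top-level `hooks` key in `settings.json` (check your copy's actual layout):

```bash
# List the hook events that are configured.
jq '.hooks | keys' your-project/.claude/settings.json
```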
```bash
# What prompts did I send today?
cat .claude/logs/*/*/$(date +%Y-%m-%d)/prompts.jsonl | jq

# What skills were used?
cat .claude/logs/*/*/*/skills.jsonl | jq

# What did subagents receive?
cat .claude/logs/*/*/*/tasks.jsonl | jq 'select(.event=="start")'

# What failed?
cat .claude/logs/*/*/*/errors.jsonl | jq
```

```bash
# Which skills are actually being used?
cat .claude/logs/*/*/*/skills.jsonl | jq -r '.skill' | sort | uniq -c | sort -rn

# Was it the main agent or a subagent that used the skill?
cat .claude/logs/*/*/*/skills.jsonl | jq '{skill, invoked_by}'
```

Common discovery: "My skill only fires 20% of the time because prompts are too short."
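To test that hypothesis, eyeball prompt lengths alongside skill activations. A sketch, assuming `prompts.jsonl` stores the prompt text in a `prompt` field (check your log schema; the field name may differ):

```bash
# Prompt lengths over time - correlate short ones with missing skill events.
cat .claude/logs/*/*/*/prompts.jsonl | jq '{timestamp, len: (.prompt | length)}'
```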
```bash
# What prompts are subagents getting?
cat .claude/logs/*/*/*/tasks.jsonl | jq 'select(.event=="start") | {subagent_type, prompt_summary}'
```

Common discovery: "The Explore agent isn't getting enough context about what to look for."
```bash
# Timeline of a session
cat .claude/logs/*/*/*/audit.jsonl | jq -s 'sort_by(.timestamp)'
```

Common discovery: "Claude read the wrong file first, then everything went off track."
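To isolate a single session, filter on `session_id` before sorting. A sketch that assumes audit entries carry the same `session_id` field shown in the skill events above:

```bash
# Replace abc123 with a session_id from your own logs.
cat .claude/logs/*/*/*/audit.jsonl | jq -s 'map(select(.session_id == "abc123")) | sort_by(.timestamp)'
```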
While you're experimenting, these prevent accidents:

| Blocked | Why |
|---|---|
| `rm -rf /`, `DROP DATABASE` | Catastrophic |
| Edits on `main`/`master` branch | Protect production |
| `kubectl delete namespace` | Cluster destruction |

| Requires Confirmation | Why |
|---|---|
| `.env`, `*.pem`, `*.key` | Secrets |
| `Dockerfile`, `*.tf` | Infrastructure |
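For a sense of how these guardrails work under the hood: Claude Code runs PreToolUse hooks with the pending tool call as JSON on stdin, and a hook that exits with code 2 blocks the call and returns its stderr to Claude. A minimal sketch, not the shipped hook (the bundled guardrails are more thorough):

```bash
#!/usr/bin/env bash
# Minimal PreToolUse guardrail: block obviously destructive shell commands.
input=$(cat)                                      # tool call arrives as JSON on stdin
cmd=$(echo "$input" | jq -r '.tool_input.command // empty')
if echo "$cmd" | grep -Eq 'rm -rf /|DROP DATABASE'; then
  echo "Blocked by guardrail: destructive command" >&2
  exit 2                                          # exit code 2 = block the tool call
fi
exit 0                                            # anything else proceeds normally
```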
The main value: use the logs to improve your Claude Code skills.

**Skill not activating?**

- Check `skills.jsonl` - is it being invoked at all?
- Check the prompt length - short prompts skip skill evaluation
- Check `invoked_by` - maybe subagents should use it but aren't

**Subagent missing context?**

- Check `tasks.jsonl` for the `prompt_summary`
- Read the full prompt from `.claude/context/subagents/{agent_id}.prompt`
- Adjust your main prompt to include the missing context

**Claude went off track?**

- Check `reads.jsonl` - what files did it read first?
- Check the `audit.jsonl` timeline - where did it diverge?
- Check `errors.jsonl` - did something fail silently?
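To run the "went off track" checks in one pass, you can interleave reads and errors into a single chronological view. A sketch, assuming both streams share the `timestamp` field used throughout the examples above:

```bash
# Merge today's reads and errors, sorted by time, to spot the divergence point.
cat .claude/logs/*/*/$(date +%Y-%m-%d)/{reads,errors}.jsonl 2>/dev/null | jq -s 'sort_by(.timestamp)'
```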
Full customization guide: CUSTOMIZATION.md - detailed reference for team leads.
Edit `.claude/settings.json`:
Disable a hook: Remove its entry from the config.
Enable auto-commit on session end: add to the `Stop` array:

```json
{
  "type": "command",
  "command": "bash \"$CLAUDE_PROJECT_DIR\"/.claude/hooks/automation/git-auto-commit.sh",
  "timeout": 30
}
```

| Hook | Why It's Off by Default | Enable If... |
|---|---|---|
| `git-auto-commit.sh` | May create unwanted commits | You want auto-save |
| `cost-tracker.sh` | Requires setup | You track API costs |
Logs are organized by `{user}/{branch}/{date}/` so multiple team members don't overwrite each other's logs.
```bash
make test
```

Validates that the hooks are correctly configured.
For deeper analysis, you can also wire up LangSmith:

```bash
export LANGSMITH_TRACING=true
export LANGSMITH_API_KEY="your-key"
```

Everything ships inside `.claude/`:

```
.claude/
├── settings.json          # Hook configuration
├── hooks/
│   ├── observability/     # 14 tracking hooks
│   ├── guardrails/        # 5 safety hooks
│   ├── team/              # 4 collaboration hooks
│   ├── automation/        # 2 productivity hooks
│   └── advanced/          # 5 extra hooks
└── logs/{user}/{branch}/{date}/
```
Built for people who want to understand and improve their Claude Code workflows.