188 changes: 188 additions & 0 deletions .claude/commands/amplihack/learnings.md
@@ -0,0 +1,188 @@
---
description: View and manage cross-session learnings
arguments:
  - name: action
    description: "Action to perform: show, search, add, stats"
    required: false
  - name: query
    description: "Category name or search query"
    required: false
---

# /amplihack:learnings Command

Manage cross-session learnings stored in `.claude/data/learnings/`.

## Actions

### show `[category]`

Display learnings from all categories or a specific category.

**Categories:** errors, workflows, tools, architecture, debugging

**Examples:**

- `/amplihack:learnings show` - Show all learnings
- `/amplihack:learnings show errors` - Show only error learnings

### search `<query>`

Search across all learning categories for matching keywords.

**Examples:**

- `/amplihack:learnings search import` - Find learnings about imports
- `/amplihack:learnings search circular dependency` - Multi-word search

### add

Interactively add a new learning. You will be prompted for:

- Category (errors/workflows/tools/architecture/debugging)
- Keywords (comma-separated)
- Summary (one sentence)
- Insight (detailed explanation)
- Example (optional code)

### stats

Show learning statistics:

- Total learnings per category
- Most used learnings
- Recently added learnings
- Average confidence scores

## Execution

When this command is invoked:

1. **Parse action and query** from arguments
2. **Load learning files** from `.claude/data/learnings/`
3. **Execute requested action**:

### For `show`:

```python
# Read all YAML files in learnings directory
# Filter by category if specified
# Format learnings as readable markdown table

# For each learning, display:
#   - ID and category
#   - Keywords (comma-separated)
#   - Summary
#   - Confidence score
#   - Times used
```
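
This command is interpreted by Claude rather than run as a script, but if the steps above were implemented directly, a minimal Python sketch might look like the following. It assumes PyYAML is available and that each category file uses the `learnings:` list structure shown in the data files below; the `load_learnings` helper is illustrative, not an existing function in this repository.

```python
# Illustrative sketch (not part of the repository): load learnings from the
# YAML data files and print the summary table. Assumes PyYAML is installed.
from pathlib import Path

import yaml

LEARNINGS_DIR = Path(".claude/data/learnings")
CATEGORIES = ["errors", "workflows", "tools", "architecture", "debugging"]


def load_learnings(category: str | None = None) -> list[dict]:
    """Read learning entries, optionally limited to a single category."""
    entries = []
    for name in [category] if category else CATEGORIES:
        path = LEARNINGS_DIR / f"{name}.yaml"
        if not path.exists():
            continue
        data = yaml.safe_load(path.read_text()) or {}
        for item in data.get("learnings", []):
            item["category"] = name
            entries.append(item)
    return entries


for entry in load_learnings("errors"):
    print(f"| {entry['id']} | {', '.join(entry['keywords'])} | "
          f"{entry['summary']} | {entry.get('confidence', 0):.2f} |")
```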

### For `search`:

```python
# Extract keywords from query
# Search across all category files
# Score matches by keyword overlap
# Return sorted results with context

# For each match, display:
#   - Category and ID
#   - Match score (percentage)
#   - Summary
#   - Full insight (if high score)
```
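
The keyword-overlap scoring could be sketched the same way; `search_learnings` is likewise a hypothetical helper, and the sample entry mirrors `err-001` from the output example below.

```python
# Illustrative sketch: score each learning by keyword overlap with the query.
def search_learnings(query: str, learnings: list[dict]) -> list[tuple[float, dict]]:
    """Return (score, learning) pairs sorted by descending keyword overlap."""
    query_words = {word.lower() for word in query.split()}
    results = []
    for entry in learnings:
        keywords = {kw.lower() for kw in entry.get("keywords", [])}
        overlap = query_words & keywords
        if overlap:
            # Fraction of query words that matched the entry's keywords.
            results.append((len(overlap) / len(query_words), entry))
    return sorted(results, key=lambda pair: pair[0], reverse=True)


sample = [{"id": "err-001", "category": "errors",
           "keywords": ["import", "circular", "dependency"],
           "summary": "Circular imports cause ImportError"}]
for score, entry in search_learnings("circular dependency", sample):
    print(f"{entry['category']}/{entry['id']}: {entry['summary']} (match {score:.0%})")
```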

### For `add`:

```python
# Ask user for category
# Ask for keywords (suggest based on context)
# Ask for one-sentence summary
# Ask for detailed insight
# Ask for example (optional)
# Generate unique ID
# Append to appropriate YAML file
# Update last_updated timestamp
```
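
A sketch of the append step under the same assumptions. The field names mirror the entries in the data files added by this PR, while the ID format and the default confidence are guesses rather than the project's actual conventions.

```python
# Illustrative sketch: append a new learning entry to a category file.
# Field names follow the YAML files below; the ID scheme is an assumption.
from datetime import datetime, timezone
from pathlib import Path

import yaml


def add_learning(category: str, keywords: list[str], summary: str,
                 insight: str, example: str | None = None) -> str:
    path = Path(".claude/data/learnings") / f"{category}.yaml"
    data = yaml.safe_load(path.read_text()) or {"category": category, "learnings": []}
    learnings = data.setdefault("learnings", [])
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    entry = {
        "id": f"{category[:3]}-{len(learnings) + 1:03d}",  # assumed ID scheme
        "created": now,
        "keywords": keywords,
        "summary": summary,
        "insight": insight,
        "confidence": 0.8,  # assumed default for new, unproven learnings
        "times_used": 0,
    }
    if example:
        entry["example"] = example
    learnings.append(entry)
    data["last_updated"] = now
    path.write_text(yaml.safe_dump(data, sort_keys=False))
    return entry["id"]
```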

### For `stats`:

```python
# Load all learning files
# Calculate:
# - Count per category
# - Total learnings
# - Average confidence
# - Most used (by times_used)
# - Recently added (last 5 by created date)
# Display formatted statistics
```
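
And a sketch of the aggregation, operating on the same assumed entry shape:

```python
# Illustrative sketch: aggregate counts, average confidence, and usage.
from collections import defaultdict


def compute_stats(learnings: list[dict]) -> dict:
    by_category: dict[str, list[dict]] = defaultdict(list)
    for entry in learnings:
        by_category[entry["category"]].append(entry)
    per_category = {
        cat: {
            "count": len(items),
            "avg_confidence": round(
                sum(e.get("confidence", 0.0) for e in items) / len(items), 2),
        }
        for cat, items in by_category.items()
    }
    return {
        "total": len(learnings),
        "by_category": per_category,
        "most_used": sorted(learnings, key=lambda e: e.get("times_used", 0), reverse=True)[:5],
        "recently_added": sorted(learnings, key=lambda e: e.get("created", ""), reverse=True)[:5],
    }
```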

## Output Format

### Show Output

```markdown
## Learnings: [Category or All]

| ID | Keywords | Summary | Confidence |
| ------- | ---------------- | ---------------------------------- | ---------- |
| err-001 | import, circular | Circular imports cause ImportError | 0.9 |
| wf-002 | git, worktree | Use worktrees for parallel work | 0.85 |

**Total:** X learnings across Y categories
```

### Search Output

```markdown
## Search Results for: "[query]"

### 1. [Category]: [Summary] (Match: 85%)

**Keywords:** import, circular, dependency
**Insight:** [First 200 chars]...

### 2. [Category]: [Summary] (Match: 60%)

...

**Found:** X matching learnings
```

### Stats Output

```markdown
## Learning Statistics

| Category | Count | Avg Confidence |
| ------------ | ----- | -------------- |
| errors | 12 | 0.82 |
| workflows | 8 | 0.78 |
| tools | 5 | 0.85 |
| architecture | 3 | 0.90 |
| debugging | 7 | 0.75 |

**Total:** 35 learnings

### Most Used

1. err-003: "Circular imports cause ImportError" (used 15 times)
2. wf-001: "Use pre-commit before push" (used 12 times)

### Recently Added

1. dbg-007: "Check Docker logs first" (2025-11-25)
2. tool-004: "Use --verbose for debugging" (2025-11-24)
```

## Related Files

- `.claude/data/learnings/errors.yaml` - Error patterns
- `.claude/data/learnings/workflows.yaml` - Workflow insights
- `.claude/data/learnings/tools.yaml` - Tool patterns
- `.claude/data/learnings/architecture.yaml` - Design decisions
- `.claude/data/learnings/debugging.yaml` - Debug strategies
- `.claude/skills/session-learning/SKILL.md` - Full skill documentation
44 changes: 44 additions & 0 deletions .claude/data/learnings/_stats.yaml
@@ -0,0 +1,44 @@
# Session Learning Statistics
# Auto-generated file tracking learning system usage metrics
# Updated automatically when learnings are injected or extracted

stats:
  # Total learnings across all categories
  total_learnings: 0

  # Learnings per category
  by_category:
    errors: 0
    workflows: 0
    tools: 0
    architecture: 0
    debugging: 0

  # Injection statistics
  injections:
    total: 0
    sessions_with_injection: 0
    avg_learnings_per_session: 0.0

  # Extraction statistics
  extractions:
    total: 0
    sessions_with_extraction: 0

  # Effectiveness metrics
  effectiveness:
    # How often injected learnings were marked as helpful
    helpful_rate: 0.0
    # Learnings that have been used 3+ times
    proven_learnings: 0

# Last updated timestamp
last_updated: "2025-11-25T00:00:00Z"

# Usage tracking by date (rolling 30 days)
daily_usage: []
# Example entry:
# - date: "2025-11-25"
#   injections: 2
#   extractions: 1
#   helpful_count: 1
73 changes: 73 additions & 0 deletions .claude/data/learnings/architecture.yaml
@@ -0,0 +1,73 @@
# Cross-Session Learning: Architecture Decisions
# This file stores architecture decisions, trade-offs, and design insights.

category: architecture
description: Architecture decisions, design trade-offs, and structural insights
last_updated: "2025-11-25T00:00:00Z"

learnings:
  # Real example: Brick philosophy for modules
  - id: "arch-001"
    created: "2025-11-25T00:00:00Z"
    keywords:
      - "module"
      - "brick"
      - "design"
      - "interface"
      - "public"
    summary: "Design modules as self-contained bricks with explicit public interfaces"
    insight: |
      Following the brick philosophy: each module should be a self-contained
      unit with a clear public interface defined via __all__. This enables:
      - Independent testing without mocking internal details
      - Easy regeneration from specifications
      - Clear contracts for other modules to depend on

      The "studs" (public API) should be stable while internal implementation
      can change freely.
    example: |
      # module/__init__.py - defines the public interface
      from .core import process, validate
      from .models import InputModel, OutputModel

      __all__ = ["process", "validate", "InputModel", "OutputModel"]

      # Internal functions stay private (not in __all__)
      # _internal_helper() in core.py is not exported
    confidence: 0.95
    times_used: 0

  # Real example: Skills vs scenarios distinction
  - id: "arch-002"
    created: "2025-11-25T00:00:00Z"
    keywords:
      - "skill"
      - "scenario"
      - "tool"
      - "claude"
      - "capability"
    summary: "Skills are Claude capabilities, scenarios are executable tools"
    insight: |
      When the user asks for "a tool", distinguish between:

      1. **Skills** (.claude/skills/): Markdown docs that give Claude new
         capabilities. Loaded automatically when relevant. NOT executable.

      2. **Scenarios** (.claude/scenarios/): Actual executable Python tools
         that users run via Makefile or command line.

      The pattern: Build the executable tool first (scenario), optionally
      add a skill that knows how to use it effectively.
    example: |
      # User says: "Create a PDF analysis tool"

      # Step 1: Build executable scenario
      .claude/scenarios/pdf-analyzer/
        tool.py      # The actual tool
        tests/       # Tests
        README.md    # Usage docs

      # Step 2: Optionally add skill for Claude
      .claude/skills/pdf/SKILL.md  # Tells Claude how to use the tool
    confidence: 0.9
    times_used: 0
69 changes: 69 additions & 0 deletions .claude/data/learnings/debugging.yaml
@@ -0,0 +1,69 @@
# Cross-Session Learning: Debugging Strategies
# This file stores debugging strategies, root cause patterns, and diagnostic techniques.

category: debugging
description: Debugging strategies, root cause analysis, and diagnostic techniques
last_updated: "2025-11-25T00:00:00Z"

learnings:
  # Real example: Isolating import issues
  - id: "dbg-001"
    created: "2025-11-25T00:00:00Z"
    keywords:
      - "import"
      - "debug"
      - "isolate"
      - "python"
      - "module"
    summary: "Use 'python -c' to isolate import errors from other code"
    insight: |
      When imports fail mysteriously (especially in larger codebases),
      isolate the problem by testing imports in a fresh Python process:

      `python -c "import module_name"`

      This removes interference from other imports, cached modules, or
      runtime state. The error message will be cleaner and point directly
      to the actual problem.
    example: |
      # Test a single import in isolation
      python -c "import problematic_module"

      # Test a specific submodule
      python -c "from package.submodule import function"

      # Check the import path
      python -c "import sys; print(sys.path)"
    confidence: 0.95
    times_used: 0

  # Real example: Hook debugging strategy
  - id: "dbg-002"
    created: "2025-11-25T00:00:00Z"
    keywords:
      - "hook"
      - "claude"
      - "debug"
      - "log"
      - "session"
    summary: "Debug Claude Code hooks by checking runtime logs first"
    insight: |
      When hooks don't behave as expected:
      1. Check .claude/runtime/logs/ for session-specific logs
      2. Look for hook execution timestamps and outputs
      3. Verify hook is registered in settings.json
      4. Test hook function independently before integration

      Hooks fail silently by design (to not break sessions), so logging
      is the primary debugging mechanism.
    example: |
      # Check recent session logs
      ls -lt .claude/runtime/logs/ | head -5

      # View specific session log
      cat .claude/runtime/logs/2025-11-25_123456/hook_output.log

      # Test hook function directly
      python -c "from hook_module import stop_hook; stop_hook(session_data)"
    confidence: 0.85
    times_used: 0