
Commit 45ca38f

GeneAI authored and committed
Merge experimental/v4.0-meta-orchestration into main
2 parents e127df0 + e1f1b5d commit 45ca38f

324 files changed: +163,583 additions, -1,854 deletions


.claude/commands/cache.md

Lines changed: 88 additions & 0 deletions
@@ -0,0 +1,88 @@
Manage the hybrid caching system for LLM cost savings.

## Cache Architecture

The Empathy Framework uses a hybrid cache:
- **Hash Cache**: Exact match on prompt hash (fast, precise)
- **Semantic Cache**: Similarity-based matching (handles variations)
- **Dependency-aware**: Invalidates when source files change
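For orientation, here is a minimal sketch of how a lookup of this kind typically works: try the exact hash first, then fall back to semantic similarity above a threshold. This is illustrative only; `HybridCache`, `embed`, `put`, and `cosine_similarity` are assumed names, not the framework's actual API, and dependency-aware invalidation is omitted here (see the sketch under Configure Cache).

```python
import hashlib
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


class HybridCache:
    """Illustrative hybrid lookup: exact hash match first, semantic fallback.

    A sketch only -- not the Empathy Framework's actual implementation.
    """

    def __init__(self, embed, similarity_threshold: float = 0.85):
        self.embed = embed                                   # callable: str -> list[float]
        self.similarity_threshold = similarity_threshold
        self.hash_store: dict[str, str] = {}                 # sha256(prompt) -> response
        self.semantic_store: list[tuple[list[float], str]] = []

    def get(self, prompt: str) -> str | None:
        # 1. Hash cache: exact match on the prompt hash (fast, precise).
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.hash_store:
            return self.hash_store[key]
        # 2. Semantic cache: closest stored prompt above the threshold.
        query = self.embed(prompt)
        best_score, best_response = 0.0, None
        for vector, response in self.semantic_store:
            score = cosine_similarity(query, vector)
            if score > best_score:
                best_score, best_response = score, response
        if best_response is not None and best_score >= self.similarity_threshold:
            return best_response
        return None  # miss: the caller makes a fresh LLM call and stores it

    def put(self, prompt: str, response: str) -> None:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        self.hash_store[key] = response
        self.semantic_store.append((self.embed(prompt), response))
```
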
## Commands

### 1. View Cache Statistics
```bash
empathy cache stats
```

Shows:
- Total entries
- Hit rate (target: 70%+)
- Size on disk
- Oldest/newest entry

### 2. Analyze Cache Effectiveness
```bash
empathy cache analyze
```

Shows:
- Hit rate by workflow type
- Most frequently cached queries
- Cache miss patterns
- Estimated cost savings

### 3. Clear Cache
```bash
# Clear all
empathy cache clear

# Clear specific workflow
empathy cache clear --workflow code-review

# Clear entries older than N days
empathy cache clear --older-than 7d
```

### 4. Warm Cache
```bash
# Pre-populate cache for common queries
empathy cache warm --workflows code-review,bug-predict
```

### 5. Configure Cache
In `empathy.config.yml`:
```yaml
cache:
  enabled: true
  type: "hybrid"              # hash, semantic, or hybrid
  max_size_mb: 100            # Maximum cache size
  ttl_hours: 24               # Time-to-live for entries
  similarity_threshold: 0.85  # For semantic matching
```

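As a rough illustration of how `ttl_hours` and dependency-aware invalidation might combine when deciding whether a cached entry is still usable: the config keys above are the confirmed interface, while `CacheEntry`, `dep_mtimes`, and `is_entry_valid` below are hypothetical names for the sketch.

```python
import os
import time
from dataclasses import dataclass, field


@dataclass
class CacheEntry:
    """Hypothetical cached response plus the metadata needed to expire it."""
    response: str
    created_at: float                                            # epoch seconds at cache time
    dep_mtimes: dict[str, float] = field(default_factory=dict)   # source path -> mtime


def is_entry_valid(entry: CacheEntry, ttl_hours: int = 24) -> bool:
    # TTL check: entries older than the configured time-to-live are stale.
    if time.time() - entry.created_at > ttl_hours * 3600:
        return False
    # Dependency check: invalidate if any tracked source file changed (or vanished).
    for path, cached_mtime in entry.dep_mtimes.items():
        if not os.path.exists(path) or os.path.getmtime(path) != cached_mtime:
            return False
    return True
```
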
## Cache Performance Targets

| Metric | Target | Action if Target Missed |
|--------|--------|-------------------------|
| Hit Rate | >70% | Increase TTL, check query patterns |
| Avg Latency | <50 ms | Check disk I/O, reduce cache size |
| Size | <100 MB | Prune old entries |

## Monitoring

```bash
# Real-time cache monitoring
empathy cache monitor

# Export cache metrics
empathy cache export-metrics --format json
```

## Output

Provide a cache health summary:
- Hit Rate: XX% (target: 70%+)
- Entries: X
- Size: X MB / 100 MB limit
- Estimated savings: $X.XX
- Recommendation: [OK / Needs attention]

.claude/commands/cost-report.md

Lines changed: 64 additions & 0 deletions
@@ -0,0 +1,64 @@
Analyze LLM API costs and show savings from intelligent tier routing.

## Leverages Existing Features

The Empathy Framework has powerful cost tracking built-in:
- `empathy telemetry show` - Recent API calls with costs
- `empathy telemetry savings` - Savings analysis vs premium baseline
- `CostTracker` - Automatic cost logging per request

## Execution Steps

### 1. Show Recent Usage
```bash
empathy telemetry show --limit 50
```

### 2. Calculate Savings
```bash
empathy telemetry savings --days 30
```

### 3. Analyze by Workflow
```bash
# If available, show cost breakdown by workflow type
empathy telemetry breakdown 2>/dev/null || echo "Breakdown not available"
```

### 4. Check Cache Effectiveness
```bash
empathy cache stats 2>/dev/null || echo "Cache stats: check .empathy/cache/"
```

## Analysis to Provide

After running the commands, analyze and report:

### Cost Summary
| Metric | Value |
|--------|-------|
| Total Spend (30d) | $X.XX |
| Baseline (if all premium) | $X.XX |
| **Your Savings** | $X.XX (XX%) |
| Cache Savings | $X.XX |

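A small sketch of the arithmetic behind this table, assuming the telemetry output provides actual spend and a premium-only baseline. The function, its parameters, and the example numbers are illustrative, not `CostTracker`'s real fields.

```python
def savings_summary(total_spend: float, premium_baseline: float,
                    cache_hits: int = 0, avg_cost_per_call: float = 0.0) -> dict:
    """Illustrative savings math for the Cost Summary table."""
    savings = premium_baseline - total_spend
    savings_pct = 100 * savings / premium_baseline if premium_baseline else 0.0
    cache_savings = cache_hits * avg_cost_per_call   # each hit avoids one paid call
    return {
        "total_spend_30d": round(total_spend, 2),
        "baseline_if_all_premium": round(premium_baseline, 2),
        "savings": round(savings, 2),
        "savings_pct": round(savings_pct, 1),
        "cache_savings": round(cache_savings, 2),
    }


# Example with hypothetical numbers: $4.20 actual vs. an $18.00 premium-only
# baseline -> ~76.7% savings, plus $1.20 saved by 120 cache hits at ~$0.01 each.
print(savings_summary(4.20, 18.00, cache_hits=120, avg_cost_per_call=0.01))
```
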
### Tier Distribution
Show percentage of calls by tier:
- CHEAP (Haiku): XX% - outlines, summaries
- CAPABLE (Sonnet): XX% - code review, generation
- PREMIUM (Opus): XX% - complex analysis

### Optimization Recommendations

Based on the data, suggest:
1. **Under-utilizing cheap tier?** - Some tasks could use Haiku
2. **High cache miss rate?** - Adjust cache TTL or size
3. **Expensive workflows?** - Identify cost hotspots
4. **Cost trending up?** - Alert on unusual patterns

### Cost Projection
- Daily average: $X.XX
- Projected monthly: $X.XX
- At current rate, daily limit ($10) reached in: X days
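A sketch of the projection arithmetic, assuming per-day spend figures are available from telemetry and using a simple linear trend to estimate when the $10 daily limit would be hit. The input shape and trend handling are simplifying assumptions for illustration.

```python
def cost_projection(recent_daily_costs: list[float], daily_limit: float = 10.0) -> dict:
    """Illustrative projection from recent per-day spend (hypothetical input shape)."""
    daily_avg = sum(recent_daily_costs) / len(recent_daily_costs)
    projected_monthly = daily_avg * 30
    # Simple linear trend: change between last and first day, per day elapsed.
    span = len(recent_daily_costs) - 1
    trend = (recent_daily_costs[-1] - recent_daily_costs[0]) / span if span else 0.0
    if trend > 0 and recent_daily_costs[-1] < daily_limit:
        days_to_limit = (daily_limit - recent_daily_costs[-1]) / trend
    else:
        days_to_limit = None   # flat or falling spend never reaches the limit
    return {
        "daily_average": round(daily_avg, 2),
        "projected_monthly": round(projected_monthly, 2),
        "days_until_daily_limit": round(days_to_limit, 1) if days_to_limit else None,
    }


# Example with hypothetical numbers: spend creeping up from $0.10 to $0.40 per day.
print(cost_projection([0.10, 0.15, 0.22, 0.30, 0.40]))
```
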
Keep output concise with actionable insights.

.claude/commands/create-agent.md

Lines changed: 77 additions & 0 deletions
@@ -0,0 +1,77 @@
# Create Custom Agent - Socratic Guide

You are helping the user create a custom AI agent for the Empathy Framework. Use the AskUserQuestion tool to gather requirements through a guided conversation.

## Step 1: Understand the Purpose

First, ask the user what they want their agent to do:

Use AskUserQuestion with:
- Question: "What should this agent do?"
- Header: "Purpose"
- Options:
  - "Analyze code" - Review and analyze source code for issues or patterns
  - "Generate content" - Create tests, documentation, or other content
  - "Review & validate" - Check work for quality, security, or correctness
  - "Transform data" - Convert, migrate, or restructure code/data

## Step 2: Determine Specialization

Based on their answer, ask about the specific focus:

Use AskUserQuestion with:
- Question: "What specific area should this agent focus on?"
- Header: "Focus Area"
- Options vary by purpose (e.g., for "Analyze code"):
  - "Security vulnerabilities"
  - "Performance issues"
  - "Code quality & style"
  - "Architecture & design"

## Step 3: Select Model Tier

Use AskUserQuestion with:
- Question: "What quality/cost balance do you need?"
- Header: "Model Tier"
- Options:
  - "Cheap (Recommended)" - Fast & low-cost ($0.001-0.01), good for simple analysis
  - "Capable" - Balanced ($0.01-0.05), good for most development tasks
  - "Premium" - Highest quality ($0.05-0.20), for complex reasoning

## Step 4: Define Success Criteria

Use AskUserQuestion with:
- Question: "How will you measure success?"
- Header: "Success"
- Options:
  - "Issues found & reported" - Agent finds and documents problems
  - "Content generated" - Agent produces requested output
  - "Validation passed" - Agent confirms quality/correctness
  - "Recommendations provided" - Agent suggests improvements

## Step 5: Generate the Agent Spec

After gathering all answers, generate the agent specification:

```json
{
  "name": "[Generated from purpose + focus]",
  "role": "[Description based on answers]",
  "tier": "[Selected tier]",
  "base_template": "generic",
  "success_criteria": "[Selected criteria]",
  "tools": []
}
```

Then tell the user:
1. Show the generated spec in a code block
2. Offer to save it: `empathy meta-workflow create-agent -q --name "X" --role "Y" --tier "Z" -o agent-spec.json`
3. Explain how to use it in a team: `empathy meta-workflow create-team`

## Important

- Use AskUserQuestion for EACH step - don't ask multiple questions at once
- Wait for the user's response before proceeding to the next step
- Keep the conversation focused and efficient
- Generate a descriptive agent name based on their choices

.claude/commands/create-team.md

Lines changed: 95 additions & 0 deletions
@@ -0,0 +1,95 @@
# Create Custom Agent Team - Socratic Guide

You are helping the user create a custom AI agent team for their project using the Empathy Framework. Use the AskUserQuestion tool to gather requirements through a guided conversation.

## Step 1: Understand the Team's Mission

Use AskUserQuestion with:
- Question: "What is the team's overall goal?"
- Header: "Team Goal"
- Options:
  - "Code quality pipeline" - Review, test, and improve code quality
  - "Release preparation" - Prepare code for production deployment
  - "Documentation sync" - Keep docs aligned with code
  - "Security audit" - Comprehensive security analysis

## Step 2: Determine Team Size

Use AskUserQuestion with:
- Question: "How many agents should be on this team?"
- Header: "Team Size"
- Options:
  - "2 agents (Recommended)" - Simple workflow: analyze, then act
  - "3 agents" - Standard: analyze, execute, validate
  - "4 agents" - Comprehensive: analyze, execute, validate, report
  - "5 agents" - Full pipeline with multiple specialists

## Step 3: Define Collaboration Pattern

Use AskUserQuestion with:
- Question: "How should agents work together?"
- Header: "Collaboration"
- Options:
  - "Sequential (Recommended)" - Each agent waits for the previous one
  - "Parallel then merge" - Run analysis in parallel, then synthesize
  - "Pipeline" - Output of each agent feeds into the next
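To make the difference between these patterns concrete, here is a rough orchestration sketch. It is illustrative only: `run_agent` and the three functions below are assumed names showing the control flow, not the framework's actual orchestrator.

```python
from concurrent.futures import ThreadPoolExecutor


def run_agent(agent: dict, task: str, context: str = "") -> str:
    """Placeholder for a single agent invocation (hypothetical signature)."""
    return f"{agent['role']} output for: {task}"


def run_sequential(agents: list[dict], task: str) -> list[str]:
    # Sequential: each agent waits for the previous one and sees its output as context.
    results: list[str] = []
    for agent in agents:
        results.append(run_agent(agent, task, context="\n".join(results)))
    return results


def run_parallel_then_merge(agents: list[dict], task: str) -> str:
    # Parallel then merge: analysis agents run concurrently; the last agent synthesizes.
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(lambda a: run_agent(a, task), agents[:-1]))
    return run_agent(agents[-1], task, context="\n".join(partials))


def run_pipeline(agents: list[dict], task: str) -> str:
    # Pipeline: the output of each agent becomes the input of the next.
    current = task
    for agent in agents:
        current = run_agent(agent, current)
    return current
```
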
## Step 4: Select Agent Roles

Based on team size and goal, present role options. For a 3-agent team:

Use AskUserQuestion with multiSelect: true:
- Question: "Which roles should your team have? (Select 3)"
- Header: "Roles"
- Options:
  - "Analyst" - Examines code/docs and identifies issues
  - "Generator" - Creates content (tests, docs, fixes)
  - "Reviewer" - Checks quality and correctness
  - "Validator" - Verifies results meet criteria

## Step 5: Cost Preference

Use AskUserQuestion with:
- Question: "What's your cost preference for this team?"
- Header: "Cost"
- Options:
  - "Minimize cost (Recommended)" - Use cheap tier where possible ($0.03-0.10 per run)
  - "Balance cost/quality" - Mix of cheap and capable ($0.10-0.30 per run)
  - "Maximize quality" - Use capable/premium tiers ($0.30-0.60 per run)

## Step 6: Generate the Team Template

After gathering all answers, generate the team specification:

```json
{
  "id": "[goal-based-id]",
  "name": "[Descriptive Team Name]",
  "description": "[Based on goal]",
  "collaboration_pattern": "[Selected pattern]",
  "agents": [
    {
      "role": "[Role 1]",
      "purpose": "[What this agent does]",
      "tier": "[Based on cost preference]",
      "base_template": "generic"
    }
  ],
  "estimated_cost_range": {
    "min": 0.03,
    "max": 0.30
  }
}
```

Then tell the user:
1. Show the generated template in a code block
2. Save it to `.empathy/meta_workflows/templates/[id].json`
3. Explain how to run it: `empathy meta-workflow run [id]`

## Important

- Use AskUserQuestion for EACH step - don't ask multiple questions at once
- Wait for the user's response before proceeding to the next step
- For role selection, use multiSelect: true
- Generate meaningful agent names and purposes based on the goal
