|
1 | | -# Show HN: Empathy Framework – AI collaboration with persistent memory |
| 1 | +# Show HN: Empathy Framework v2.3 – Persistent AI memory + 80% cost savings |
2 | 2 |
|
3 | | -**Title:** Empathy Framework – AI collaboration with persistent memory and multi-agent orchestration |
| 3 | +**Title:** Empathy Framework v2.3 – Persistent AI memory + smart model routing (80% cost savings) |
4 | 4 |
|
5 | | -**URL:** https://github.com/Smart-AI-Memory/empathy |
| 5 | +**URL:** https://github.com/Smart-AI-Memory/empathy-framework |
6 | 6 |
|
7 | 7 | --- |
8 | 8 |
|
9 | | -I've been building AI tools for healthcare and software development. The biggest frustration? Every AI session starts from zero. No memory of what you worked on yesterday, no patterns learned across projects, no coordination between agents. |
| 9 | +I've been building AI tools and got tired of two problems: every session starts from zero, and I was paying Opus prices for simple tasks. |
10 | 10 |
|
11 | | -So I built the Empathy Framework. |
| 11 | +So I built Empathy Framework. v2.3 just shipped with major cost optimization. |
12 | 12 |
|
13 | | -**The five problems it solves:** |
| 13 | +**The 80% cost savings:** |
14 | 14 |
|
15 | | -1. **Stateless** — AI forgets everything between sessions. Empathy has dual-layer memory: git-based pattern storage for long-term knowledge (no infrastructure required), optional Redis for real-time coordination. |
| 15 | +New ModelRouter automatically picks the right model tier: |
16 | 16 |
|
17 | | -2. **Cloud-dependent** — Your code goes to someone else's servers. Empathy runs entirely local-first. Memory lives in your repo, version-controlled like code. |
| 17 | +```python |
| 18 | +from empathy_llm_toolkit import EmpathyLLM |
| 19 | + |
| 20 | +llm = EmpathyLLM(provider="anthropic", enable_model_routing=True) |
| 21 | + |
| 22 | +# Summaries → Haiku ($0.25/M tokens) |
| 23 | +# Code gen → Sonnet ($3/M tokens) |
| 24 | +# Architecture → Opus ($15/M tokens) |
| 25 | +``` |
| 26 | + |
| 27 | +Real numbers: $4.05/task → $0.83/task, roughly 80% savings just from using the right model for each task.
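That headline figure is plain arithmetic on the two per-task costs above, nothing framework-specific:

```python
# Per-task cost before and after model routing, from the numbers above
before = 4.05
after = 0.83

savings = 1 - after / before
print(f"{savings:.0%}")  # 80%
```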
18 | 28 |
|
19 | | -3. **Isolated** — AI tools can't coordinate. Empathy has built-in multi-agent orchestration (Empathy OS) for human↔AI and AI↔AI collaboration. |
| 29 | +**The memory problem:** |
20 | 30 |
|
21 | | -4. **Reactive** — AI waits for you to find problems. Empathy predicts issues 30-90 days ahead using pattern analysis. |
| 31 | +AI forgets everything between sessions. Tell it you prefer type hints? Gone next time. Empathy adds persistent memory: |
| 32 | + |
| 33 | +```python |
| 34 | +llm = EmpathyLLM(provider="anthropic", memory_enabled=True) |
| 35 | + |
| 36 | +await llm.interact( |
| 37 | + user_id="me", |
| 38 | + user_input="I prefer Python with type hints" |
| 39 | +) |
| 40 | +# Survives across sessions |
| 41 | +``` |
22 | 42 |
|
23 | | -5. **Expensive** — Every query costs the same, and you waste tokens re-explaining context. Empathy routes smartly (cheap models detect, capable models decide) AND eliminates repeated context — no more re-teaching your AI what it should already know. |
| 43 | +**New in v2.3:** |
| 44 | + |
| 45 | +1. **ModelRouter** — Automatic Haiku/Sonnet/Opus selection based on task complexity |
| 46 | +2. **`empathy sync-claude`** — Sync learned patterns to Claude Code's `.claude/rules/` directory |
| 47 | +3. **Debug Wizard** — Web UI at empathy-framework.vercel.app/tools/debug-wizard that remembers past bugs |
| 48 | + |
| 49 | +**How the memory works:** |
| 50 | + |
| 51 | +- Git-based pattern storage (no infrastructure needed) |
| 52 | +- Optional Redis for real-time coordination |
| 53 | +- Bug patterns, security decisions, coding preferences all persist |
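For a rough idea of what "git-based pattern storage" can mean in practice, here is a conceptual sketch, not Empathy's actual schema; the `.empathy/patterns` layout and field names are invented for illustration. The point is that learned patterns are just files in the repo, so `git commit` is the persistence layer:

```python
import json
from pathlib import Path

# Hypothetical layout: one JSON file per learned pattern, versioned with the repo
PATTERNS_DIR = Path(".empathy/patterns")

def save_pattern(name: str, data: dict) -> Path:
    """Write a pattern into the repo; committing it makes it durable and shareable."""
    PATTERNS_DIR.mkdir(parents=True, exist_ok=True)
    path = PATTERNS_DIR / f"{name}.json"
    path.write_text(json.dumps(data, indent=2))
    return path

def load_patterns() -> dict:
    """Read every stored pattern back, e.g. at session start."""
    return {
        p.stem: json.loads(p.read_text())
        for p in PATTERNS_DIR.glob("*.json")
    }

save_pattern("coding-style", {"language": "python", "type_hints": True})
print(load_patterns()["coding-style"]["type_hints"])  # True
```

Because the store is plain files, it needs no server, diffs cleanly in review, and travels with clones and branches — which is the "no infrastructure" property the bullet above is claiming.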
24 | 54 |
|
25 | 55 | **What's included:** |
26 | 56 |
|
27 | | -- **Code Health Assistant** (`empathy health`) — lint, format, types, tests, security in one command with auto-fix |
28 | | -- **Pattern-based code review** (`empathy review`) — catches bugs before they happen based on your team's history |
29 | | -- Memory Control Panel CLI (`empathy-memory serve`) and REST API |
30 | | -- 30+ production wizards (security, performance, testing, docs, accessibility) |
31 | | -- Agent toolkit to build custom agents that inherit memory and prediction |
32 | | -- Healthcare suite with HIPAA-compliant patterns (SBAR, SOAP notes) |
33 | | -- Works with Claude, GPT-4, Ollama, or your own models |
| 57 | +- `empathy-inspect` — unified code inspection (lint, security, tests, tech debt) |
| 58 | +- SARIF output for GitHub/GitLab code scanning |
| 59 | +- HTML dashboard reports |
| 60 | +- 30+ production wizards (security, performance, testing, docs) |
| 61 | +- Works with Claude, GPT-4, or Ollama |
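If you haven't used SARIF before: it's the standard JSON format (v2.1.0) that GitHub and GitLab code scanning ingest, so any tool emitting it shows up as inline annotations on PRs. A minimal result looks roughly like this (the rule ID and file path here are made-up examples, not actual `empathy-inspect` output):

```json
{
  "version": "2.1.0",
  "runs": [{
    "tool": {"driver": {"name": "empathy-inspect"}},
    "results": [{
      "ruleId": "security/hardcoded-secret",
      "level": "warning",
      "message": {"text": "Possible hardcoded credential"},
      "locations": [{
        "physicalLocation": {
          "artifactLocation": {"uri": "app/config.py"},
          "region": {"startLine": 12}
        }
      }]
    }]
  }]
}
```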
34 | 62 |
|
35 | 63 | **Quick start:** |
36 | 64 |
|
37 | 65 | ```bash |
38 | 66 | pip install empathy-framework |
39 | | -empathy health # Check code health |
40 | | -empathy health --fix # Auto-fix safe issues |
41 | | -empathy-memory serve # Start memory server |
42 | 67 | ``` |
43 | 68 |
|
44 | | -That's it. Redis auto-starts for real-time features, but long-term pattern storage works with just git — no infrastructure needed for students and individual developers. |
45 | | - |
46 | | -**Example:** |
47 | | - |
48 | 69 | ```python |
49 | | -from empathy_os import EmpathyOS |
50 | | - |
51 | | -os = EmpathyOS() |
52 | | - |
53 | | -result = await os.collaborate( |
54 | | - "Review this deployment pipeline for problems", |
55 | | - context={"code": pipeline_code} |
| 70 | +llm = EmpathyLLM( |
| 71 | + provider="anthropic", |
| 72 | + memory_enabled=True, |
| 73 | + enable_model_routing=True |
56 | 74 | ) |
57 | 75 |
|
58 | | -print(result.current_issues) # What's wrong now |
59 | | -print(result.predicted_issues) # What will break in 30-90 days |
60 | | -print(result.prevention_steps) # How to prevent it |
| 76 | +await llm.interact(user_id="dev", user_input="Review this code") |
61 | 77 | ``` |
62 | 78 |
|
63 | | -**Licensing:** |
64 | | - |
65 | | -Fair Source 0.9 — Free for students, educators, and teams ≤5 employees. Commercial license $99/dev/year. Auto-converts to Apache 2.0 on January 1, 2029. |
| 79 | +**Licensing:** Fair Source 0.9 — Free for students and teams of ≤5 people. $99/dev/year commercial. Auto-converts to Apache 2.0 on Jan 1, 2029.
66 | 80 |
|
67 | 81 | **What I'm looking for:** |
68 | 82 |
|
69 | | -- Feedback on the memory architecture (git-based patterns + optional Redis) |
70 | | -- Ideas for cross-domain pattern transfer (healthcare insights → software) |
71 | | -- Integration suggestions (CI/CD, IDE, pre-commit hooks?) |
| 83 | +- Feedback on the model routing approach |
| 84 | +- Ideas for other cost optimizations |
| 85 | +- Integration suggestions (CI/CD, pre-commit hooks?) |
| 86 | + |
| 87 | +GitHub: https://github.com/Smart-AI-Memory/empathy-framework |
72 | 88 |
|
73 | | -GitHub: https://github.com/Smart-AI-Memory/empathy |
| 89 | +Live demo: https://empathy-framework.vercel.app/tools/debug-wizard |
74 | 90 |
|
75 | | -Happy to answer questions about the architecture or use cases. |
| 91 | +Happy to answer questions. |