Commit 822e711

GeneAI authored and committed

docs: Update marketing drafts for v2.3.0 with ModelRouter feature

1 parent 87916cf commit 822e711

File tree: 3 files changed, +93 additions, −41 deletions


docs/marketing/drafts/DEVTO_ARTICLE.md

Lines changed: 28 additions & 3 deletions

@@ -10,7 +10,7 @@ cover_image:
 
 Every conversation with Claude starts from scratch. Tell it you prefer concise code examples, and next session? It's forgotten.
 
-Here's how to fix that.
+Here's how to fix that—plus save 80% on API costs.
 
 ## The Problem
 
@@ -113,17 +113,42 @@ On a real codebase (364 debt items, 81 security findings):
 - **Security noise reduction**: 84% (81 → 13 findings after learning)
 - **Tech debt tracking**: Trajectory predicts 2x growth in 170 days
 
+## NEW in v2.3: Smart Model Routing (80% Cost Savings)
+
+Why pay Opus prices for simple tasks? The new ModelRouter automatically picks the right model:
+
+```python
+llm = EmpathyLLM(
+    provider="anthropic",
+    enable_model_routing=True  # NEW!
+)
+
+# Summarization → Haiku ($0.25/M tokens)
+await llm.interact(user_id="dev", user_input="Summarize this", task_type="summarize")
+
+# Code generation → Sonnet ($3/M tokens)
+await llm.interact(user_id="dev", user_input="Write a function", task_type="generate_code")
+
+# Architecture → Opus ($15/M tokens)
+await llm.interact(user_id="dev", user_input="Design the system", task_type="architectural_decision")
+```
+
+**Cost comparison on real workload:**
+- Without routing (all Opus): $4.05/complex task
+- With routing (tiered): $0.83/complex task
+- **Savings: 80%**
+
 ## Get Started
 
 ```bash
 pip install empathy-framework
 ```
 
 **Resources:**
-- [GitHub](https://github.com/Smart-AI-Memory/empathy-framework) - 500+ downloads day 1
+- [GitHub](https://github.com/Smart-AI-Memory/empathy-framework)
 - [Documentation](https://www.smartaimemory.com/docs)
 - [Live Demo](https://www.smartaimemory.com/tools/debug-wizard)
 
 ---
 
-*What would you build with an AI that remembers? Drop a comment below.*
+*What would you build with an AI that remembers—and costs 80% less? Drop a comment below.*
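As a sanity check on the cost comparison added above, the 80% figure follows directly from the two per-task prices quoted in the draft:

```python
# Per-task prices as quoted in the DEVTO draft's cost comparison.
all_opus = 4.05  # $/complex task without routing (all Opus)
tiered = 0.83    # $/complex task with tiered routing

# Percentage saved by routing, rounded to the nearest whole percent.
savings_pct = round((all_opus - tiered) / all_opus * 100)
print(savings_pct)  # prints 80
```

The exact value is 79.5%, so "80%" is a fair rounding rather than an exaggeration.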

docs/marketing/drafts/REDDIT_POSTS.md

Lines changed: 26 additions & 15 deletions

@@ -4,7 +4,7 @@
 
 ## r/ClaudeAI
 
-**Title:** I built a persistent memory layer for Claude - open source
+**Title:** I built a persistent memory layer for Claude + smart model routing (80% cost savings)
 
 **Body:**
 
@@ -26,49 +26,60 @@ await llm.interact(
 
 Next session, Claude remembers.
 
+**v2.3 just shipped with ModelRouter** - automatically picks Haiku/Sonnet/Opus based on task complexity. Real savings: $4.05 → $0.83 per complex task (80% reduction).
+
+```python
+llm = EmpathyLLM(provider="anthropic", enable_model_routing=True)
+await llm.interact(user_id="dev", user_input="Summarize this", task_type="summarize")  # → Haiku
+```
+
 **Features:**
 - Cross-session memory persistence
 - Per-user isolation
 - Privacy controls (clear/forget)
 - Five "empathy levels" from reactive to anticipatory
+- **NEW:** Smart model routing (80% cost savings)
 
-Just hit PyPI: `pip install empathy-framework`
+On PyPI: `pip install empathy-framework`
 
-Working on getting it into the Anthropic Cookbook. Happy to answer questions.
+Happy to answer questions.
 
 ---
 
 ## r/Python
 
-**Title:** empathy-framework: Add persistent memory to LLMs in Python
+**Title:** empathy-framework v2.3: Persistent LLM memory + smart model routing (80% cost savings)
 
 **Body:**
 
-Released v2.2.7 of [empathy-framework](https://pypi.org/project/empathy-framework/) - a Python library that adds persistent, cross-session memory to LLM interactions.
+Just released v2.3 of [empathy-framework](https://pypi.org/project/empathy-framework/) - a Python library that adds persistent memory to LLM interactions, plus automatic model routing for cost optimization.
 
 ```python
 from empathy_llm_toolkit import EmpathyLLM
 
-llm = EmpathyLLM(provider="anthropic", memory_enabled=True)
+llm = EmpathyLLM(
+    provider="anthropic",
+    memory_enabled=True,
+    enable_model_routing=True  # NEW in v2.3
+)
 
 # Memory survives across sessions
 await llm.interact(user_id="user123", user_input="Remember I prefer async/await")
-```
-
-**Why I built it:**
 
-Most LLM APIs are stateless. Great for simple queries, but if you're building:
-- Dev assistants that learn your style
-- Customer support with history
-- Personal tools that adapt
+# Automatic model selection based on task
+await llm.interact(user_id="user123", user_input="Summarize this", task_type="summarize")  # → Haiku
+```
 
-...you need persistent context.
+**What's new in v2.3:**
+- **ModelRouter**: Auto-picks Haiku/Sonnet/Opus based on task complexity
+- Real cost savings: $4.05 → $0.83 per complex task (80% reduction)
 
-**Features:**
+**Core features:**
 - Works with Claude, OpenAI, local models
 - Per-user memory isolation
 - Privacy controls built in
 - Async-first design
+- Five "empathy levels" from reactive to anticipatory
 
 GitHub: https://github.com/Smart-AI-Memory/empathy-framework
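The drafts above show which model each `task_type` lands on, but not the routing table itself. As an illustration only, a minimal version of such a table could look like the sketch below; the model names, prices, and cheapest-tier fallback are assumptions taken from the drafts' examples, not empathy-framework's actual internals:

```python
# Hypothetical routing table; tiers and $/M-token prices as quoted in the drafts.
# The mapping and fallback policy are illustrative assumptions only.
ROUTES = {
    "summarize": ("claude-haiku", 0.25),
    "generate_code": ("claude-sonnet", 3.00),
    "architectural_decision": ("claude-opus", 15.00),
}

def pick_model(task_type: str) -> str:
    """Return the model for a task type, falling back to the cheapest tier."""
    model, _price_per_m_tokens = ROUTES.get(task_type, ROUTES["summarize"])
    return model

print(pick_model("generate_code"))  # prints claude-sonnet
```

Defaulting unknown task types to the cheapest tier is one plausible policy; a real router might instead escalate to a stronger model when the task is ambiguous.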

docs/marketing/drafts/TWITTER_THREAD.md

Lines changed: 39 additions & 23 deletions

@@ -4,28 +4,28 @@ Copy each numbered item as a separate tweet.
 
 ---
 
-**1/7**
-What if Claude remembered your preferences across sessions?
+**1/8**
+What if Claude remembered your preferences across sessions—and cost 80% less?
 
-I built @empathy_framework to give LLMs persistent memory.
+Just shipped empathy-framework v2.3 with smart model routing.
 
 pip install empathy-framework
 
 🧵
 
 ---
 
-**2/7**
+**2/8**
 The problem: Every Claude conversation starts fresh.
 
 Tell it you prefer concise code? Forgotten next session.
 
-Working on a project? Context lost.
+And you're paying Opus prices for simple tasks.
 
 ---
 
-**3/7**
-The fix:
+**3/8**
+The fix - persistent memory:
 
 ```python
 from empathy_llm_toolkit import EmpathyLLM
@@ -45,7 +45,25 @@ That preference now survives.
 
 ---
 
-**4/7**
+**4/8**
+NEW in v2.3 - ModelRouter:
+
+```python
+llm = EmpathyLLM(
+    provider="anthropic",
+    enable_model_routing=True
+)
+
+# Summarize → Haiku ($0.25/M)
+# Code gen → Sonnet ($3/M)
+# Architecture → Opus ($15/M)
+```
+
+Real savings: $4.05 → $0.83 per task (80%)
+
+---
+
+**5/8**
 It tracks:
 → User preferences
 → Project context
@@ -55,7 +73,7 @@ Each user gets isolated memory. Privacy controls built in.
 
 ---
 
-**5/7**
+**6/8**
 Five empathy levels:
 
 1. Reactive (standard)
@@ -66,7 +84,7 @@ Five empathy levels:
 
 ---
 
-**6/7**
+**7/8**
 Now on PyPI:
 
 pip install empathy-framework
@@ -77,35 +95,32 @@ Docs: smartaimemory.com/docs
 
 ---
 
-**7/7**
-Working on getting this into the @AnthropicAI cookbook.
-
-What would you build with an AI that remembers you?
+**8/8**
+What would you build with an AI that remembers you—and costs 80% less?
 
 ---
 
 # Alt: Shorter 4-tweet version
 
 **1/4**
-What if Claude remembered you across sessions?
+What if Claude remembered you across sessions—and cost 80% less?
 
-Built empathy-framework to add persistent memory to LLMs.
+Just shipped empathy-framework v2.3 with smart model routing.
 
 pip install empathy-framework
 
 ---
 
 **2/4**
 ```python
-llm = EmpathyLLM(provider="anthropic", memory_enabled=True)
-
-await llm.interact(
-    user_id="you",
-    user_input="I prefer concise answers"
+llm = EmpathyLLM(
+    provider="anthropic",
+    memory_enabled=True,
+    enable_model_routing=True  # NEW!
 )
 ```
 
-Next session? Still remembered.
+Memory persists. Costs drop 80%.
 
 ---
 
@@ -115,10 +130,11 @@ Features:
 → Per-user isolation
 → Privacy controls
 → Five "empathy levels"
+→ NEW: Smart model routing (Haiku/Sonnet/Opus auto-selection)
 
 ---
 
 **4/4**
 GitHub: github.com/Smart-AI-Memory/empathy-framework
 
-What would you build with an AI that remembers?
+What would you build with an AI that remembers—and costs 80% less?
