
Commit 09f10bc

committed
Add agentic-memory directory with langraph implementation
1 parent 6b5348f commit 09f10bc

File tree

9 files changed

+4316
-1
lines changed


agentic-memory

Lines changed: 0 additions & 1 deletion
This file was deleted.

agentic-memory/README.md

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@

# Agent Memory - Can LLMs *Really* Think?

<img src="./media/memory.png" width=600>

*[Cognitive Architectures for Language Agents, 2024](https://arxiv.org/pdf/2309.02427)*

LLMs are "stateless": every time you invoke an LLM call, the model processes the input as if it had never seen it before. Because of this quirk, multi-turn LLM agents face a unique challenge in building and navigating the kind of rich world model that we humans maintain naturally.

Being human confers real advantages over a language model when executing a task. We bring general knowledge about the world and lived experience, takeaways from prior similar tasks, skills we have specifically learned or been taught, and the ability to instantly contextualize and adapt our approach as we work. In essence, we have advanced memory and the ability to learn from experience and apply those lessons to new situations.

LLMs do have some memory of a sort, mostly general knowledge and traits picked up during training and fine-tuning, but they lack the other characteristics outlined above. To compensate, we can model different forms of memory, recall, and learning within our agentic system design. Specifically, we'll create a simple RAG agent that models four kinds of memory:

- **Working Memory** - Current conversation and immediate context
- **Episodic Memory** - Historical experiences and their takeaways
- **Semantic Memory** - Knowledge context and factual grounding
- **Procedural Memory** - The "rules" and "skills" for interaction

Together, these four memory systems offer a holistic way to understand and architect cognitive design into an agent application. In this notebook we'll break down each type of memory and walk through an example approach to implementing them in a complete agent experience.
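Before reading the full implementation below, it may help to see the per-turn control flow in isolation. The following is a framework-free sketch (all helper names here are hypothetical, not part of the actual code): episodic lessons and procedural rules go into the system prompt, retrieved semantic chunks are prepended as grounding, and the running history forms the working memory.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Turn:
    role: str
    content: str

def run_turn(
    query: str,
    history: List[Turn],
    episodic_recall: Callable[[str], str],   # lessons from similar past chats
    semantic_recall: Callable[[str], str],   # retrieved factual chunks
    procedural_rules: List[str],             # standing interaction guidelines
    llm: Callable[[List[Turn]], str],        # any chat-completion function
) -> Tuple[str, List[Turn]]:
    """One conversational turn combining all four memory types."""
    system = Turn("system", (
        f"Lessons from similar conversations: {episodic_recall(query)}\n"
        f"Guidelines: {' '.join(procedural_rules)}"
    ))
    grounding = Turn("user", f"Context:\n{semantic_recall(query)}")
    # Working memory = system prompt + prior turns + fresh grounding + new query
    messages = [system, *history, grounding, Turn("user", query)]
    answer = llm(messages)
    return answer, history + [Turn("user", query), Turn("assistant", answer)]

# Demo with stubbed memory sources and a trivial "LLM"
answer, history = run_turn(
    "What is episodic memory?",
    history=[],
    episodic_recall=lambda q: "User prefers short answers.",
    semantic_recall=lambda q: "CHUNK 1: Episodic memory stores past experiences.",
    procedural_rules=["Be concise."],
    llm=lambda msgs: f"(answering from {len(msgs)} messages)",
)
print(answer)  # → (answering from 3 messages)
```

The real agent below follows this same shape, with Weaviate hybrid search supplying the episodic and semantic recall and a file on disk supplying the procedural rules.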

agentic-memory/agentic_memory.py

Lines changed: 197 additions & 0 deletions
@@ -0,0 +1,197 @@
1+
from datetime import datetime
2+
from typing import List, Optional, Set
3+
from pydantic import BaseModel, Field
4+
from langchain_openai import ChatOpenAI
5+
from langchain_core.messages import HumanMessage, SystemMessage, AIMessage
6+
from langchain_core.prompts import ChatPromptTemplate
7+
from langchain_core.output_parsers import JsonOutputParser
8+
import weaviate
9+
10+
# Base Message Model
11+
class Message(BaseModel):
12+
"""Base message model for all communications"""
13+
role: str
14+
content: str
15+
timestamp: datetime = Field(default_factory=datetime.now)
16+
17+
# Memory Models
18+
class WorkingMemory(BaseModel):
19+
"""Stores current conversation context and active state"""
20+
messages: List[Message] = []
21+
system_prompt: str = "You are a helpful AI Assistant."
22+
semantic_context: Optional[str] = None
23+
24+
class EpisodicMemory(BaseModel):
25+
"""Stores historical experiences and reflections"""
26+
conversation: str
27+
context_tags: List[str]
28+
conversation_summary: str
29+
what_worked: str
30+
what_to_avoid: str
31+
32+
class SemanticMemory(BaseModel):
33+
"""Stores factual knowledge and information"""
34+
chunk: str
35+
36+
class ProceduralMemory(BaseModel):
37+
"""Stores interaction guidelines and learned behaviors"""
38+
guidelines: List[str]
39+
40+
# Memory Tools
41+
class MemoryTools(BaseModel):
42+
"""Tools for memory operations"""
43+
44+
def format_conversation(self, messages: List[Message]) -> str:
45+
"""Format messages into a readable conversation string"""
46+
conversation = []
47+
for message in messages[1:]: # Skip system message
48+
conversation.append(f"{message.role.upper()}: {message.content}")
49+
return "\n".join(conversation)
50+
51+
def episodic_recall(self, query: str, vdb_client) -> EpisodicMemory:
52+
"""Retrieve relevant episodic memory"""
53+
episodic_memory = vdb_client.collections.get("episodic_memory")
54+
memory = episodic_memory.query.hybrid(
55+
query=query,
56+
alpha=0.5,
57+
limit=1,
58+
)
59+
props = memory.objects[0].properties
60+
return EpisodicMemory(
61+
conversation=props['conversation'],
62+
context_tags=props['context_tags'],
63+
conversation_summary=props['conversation_summary'],
64+
what_worked=props['what_worked'],
65+
what_to_avoid=props['what_to_avoid']
66+
)
67+
68+
def semantic_recall(self, query: str, vdb_client) -> str:
69+
"""Retrieve relevant semantic knowledge"""
70+
coala_collection = vdb_client.collections.get("CoALA_Paper")
71+
memories = coala_collection.query.hybrid(
72+
query=query,
73+
alpha=0.5,
74+
limit=15,
75+
)
76+
combined_text = ""
77+
for i, memory in enumerate(memories.objects):
78+
combined_text += f"\nCHUNK {i+1}:\n"
79+
combined_text += memory.properties['chunk'].strip()
80+
return combined_text
81+
82+
def load_procedural_memory(self) -> ProceduralMemory:
83+
"""Load procedural memory guidelines"""
84+
with open("./procedural_memory.txt", "r") as content:
85+
guidelines = content.read().split("\n")
86+
return ProceduralMemory(guidelines=guidelines)
87+
88+
# Memory Agent
89+
class MemoryAgent(BaseModel):
90+
"""Main agent class integrating all memory types"""
91+
working_memory: WorkingMemory = Field(default_factory=WorkingMemory)
92+
tools: MemoryTools = Field(default_factory=MemoryTools)
93+
llm: ChatOpenAI = Field(default_factory=lambda: ChatOpenAI(temperature=0.7, model="gpt-4"))
94+
vdb_client: Optional[object] = None
95+
96+
def initialize(self, vdb_client):
97+
"""Initialize the agent with vector database client"""
98+
self.vdb_client = vdb_client
99+
100+
def update_system_prompt(self, query: str) -> str:
101+
"""Update system prompt with memory context"""
102+
# Get episodic memory
103+
episodic = self.tools.episodic_recall(query, self.vdb_client)
104+
105+
# Load procedural memory
106+
procedural = self.tools.load_procedural_memory()
107+
108+
# Format system prompt
109+
prompt = f"""You are a helpful AI Assistant. Answer the user's questions to the best of your ability.
110+
You recall similar conversations with the user, here are the details:
111+
112+
Current Conversation Match: {episodic.conversation}
113+
What has worked well: {episodic.what_worked}
114+
What to avoid: {episodic.what_to_avoid}
115+
116+
Use these memories as context for your response to the user.
117+
118+
Additionally, here are guidelines for interactions with the current user:
119+
{' '.join(procedural.guidelines)}"""
120+
121+
return prompt
122+
123+
def get_semantic_context(self, query: str) -> str:
124+
"""Get relevant semantic context"""
125+
context = self.tools.semantic_recall(query, self.vdb_client)
126+
return f"""If needed, Use this grounded context to factually answer the next question.
127+
Let me know if you do not have enough information or context to answer a question.
128+
129+
{context}
130+
"""
131+
132+
def process_message(self, user_input: str) -> str:
133+
"""Process user message and generate response"""
134+
# Update system prompt
135+
system_prompt = self.update_system_prompt(user_input)
136+
system_message = SystemMessage(content=system_prompt)
137+
138+
# Get semantic context
139+
semantic_context = self.get_semantic_context(user_input)
140+
semantic_message = HumanMessage(content=semantic_context)
141+
142+
# Create user message
143+
user_message = HumanMessage(content=user_input)
144+
145+
# Update working memory
146+
self.working_memory.messages = [
147+
system_message,
148+
*[msg for msg in self.working_memory.messages if not isinstance(msg, SystemMessage)],
149+
semantic_message,
150+
user_message
151+
]
152+
153+
# Generate response
154+
response = self.llm.invoke(self.working_memory.messages)
155+
156+
# Add response to working memory
157+
self.working_memory.messages.append(response)
158+
159+
return response.content
160+
161+
def save_episodic_memory(self):
162+
"""Save conversation to episodic memory"""
163+
conversation = self.tools.format_conversation(self.working_memory.messages)
164+
165+
# Create reflection using LLM
166+
reflection_prompt = ChatPromptTemplate.from_template("""
167+
You are analyzing conversations to create memories that will help guide future interactions.
168+
Review the conversation and create a memory reflection following these rules:
169+
1. For any field where you don't have enough information, use "N/A"
170+
2. Be extremely concise - each string should be one clear, actionable sentence
171+
3. Focus only on information that would be useful for future conversations
172+
4. Context_tags should be specific enough to match similar situations but general enough to be reusable
173+
174+
Output valid JSON in exactly this format:
175+
{
176+
"context_tags": [string],
177+
"conversation_summary": string,
178+
"what_worked": string,
179+
"what_to_avoid": string
180+
}
181+
182+
Here is the conversation:
183+
{conversation}
184+
""")
185+
186+
reflection = reflection_prompt | self.llm | JsonOutputParser()
187+
memory = reflection.invoke({"conversation": conversation})
188+
189+
# Save to vector database
190+
episodic_memory = self.vdb_client.collections.get("episodic_memory")
191+
episodic_memory.data.insert({
192+
"conversation": conversation,
193+
"context_tags": memory['context_tags'],
194+
"conversation_summary": memory['conversation_summary'],
195+
"what_worked": memory['what_worked'],
196+
"what_to_avoid": memory['what_to_avoid'],
197+
})
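The reflection step in `save_episodic_memory` depends on the LLM emitting JSON with exactly the keys that are then inserted into the `episodic_memory` collection. A minimal, dependency-free check of that contract, using a hand-written sample in place of a real LLM call (the sample values are illustrative, not real output):

```python
import json

# Hypothetical sample of what the reflection prompt asks the LLM to return.
sample_reflection = """{
    "context_tags": ["memory", "agents"],
    "conversation_summary": "Explained the four memory types.",
    "what_worked": "Concrete examples kept answers grounded.",
    "what_to_avoid": "Avoid long unstructured replies."
}"""

memory = json.loads(sample_reflection)

# These are exactly the keys save_episodic_memory() writes to the
# episodic_memory collection (alongside the raw conversation text).
required = {"context_tags", "conversation_summary", "what_worked", "what_to_avoid"}
assert required <= memory.keys()
assert isinstance(memory["context_tags"], list)
print("reflection schema OK")  # → reflection schema OK
```

In the real code, `JsonOutputParser` performs the parsing; validating the keys before the Weaviate insert would guard against a malformed reflection.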
