Description
I can successfully write long-term memory with `create_memory_store_manager`, but I cannot retrieve it via `create_search_memory_tool` or agent recall. The agent consistently responds that it has no memory, even though memory extraction runs without errors.
Environment

- LangMem: latest
- LangGraph: v1.x
- LangChain: latest
- Python: 3.x
- Store: PostgresStore
Code:

```python
from langmem import create_manage_memory_tool, create_search_memory_tool
from langmem import ReflectionExecutor, create_memory_store_manager
from langchain.agents import create_agent
from langgraph.func import entrypoint
from postgresdb import close_pool, get_checkpointer, get_store
from usertable import UserProfile
from memory import llm

# 1. Create a specialized memory manager.
# This is a background worker dedicated to extracting our UserProfile schema.
memory_manager = create_memory_store_manager(
    llm,
    namespace=("profiles", "{user_id}"),
    schemas=[UserProfile],
    instructions="Extract user traits and update the profile.",
)

# 2. Wrap it in the executor.
# This handles the scheduling and debouncing logic.
executor = ReflectionExecutor(memory_manager)


def create_persistent_agent():
    # 1. Initialize infrastructure
    checkpointer = get_checkpointer()
    store = get_store()

    # 2. Configure agent tools
    # The agent gets tools to manually search or update memory if needed
    user_namespace = ("profiles", "{user_id}")
    tools = [
        create_manage_memory_tool(namespace=user_namespace, schema=UserProfile),
        create_search_memory_tool(namespace=user_namespace),
    ]

    # 3. Create the graph
    # We pass BOTH the checkpointer (for thread state) and store (for knowledge)
    agent_graph = create_agent(
        llm,
        tools=tools,
        checkpointer=checkpointer,
        store=store,
    )
    return agent_graph


# --- The chat entrypoint ---
# Note: the entrypoint decorator manages the store injection
@entrypoint(store=get_store())
def chat_workflow(message: str, config):
    user_id = config["configurable"]["user_id"]
    agent = create_persistent_agent()

    # 1. Generate response (hot path - fast)
    response = agent.invoke({"messages": [{"role": "user", "content": message}]}, config)
    ai_msg = response["messages"][-1].content

    # 2. Schedule memory update (background path)
    # The executor handles the logic of when to process
    to_process = {
        "messages": [
            {"role": "user", "content": message},
            {"role": "assistant", "content": ai_msg},
        ],
        "user_id": user_id,
    }
    memory_manager.invoke(to_process)
    return ai_msg


if __name__ == "__main__":
    try:
        # Example run
        config = {"configurable": {"user_id": "u123", "thread_id": "t466"}}
        print("--- Interaction 1 ---")
        response = chat_workflow.invoke(
            "Search your memory and tell me what you know about me", config
        )
        print(f"Agent: {response}")
    finally:
        # Ensure cleanup
        close_pool()
```
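To narrow down whether this is a write problem or a retrieval problem, I also tried querying the store directly, bypassing both the agent and the search tool. A minimal debugging sketch (it reuses the `get_store()` helper from the repro, and relies on `BaseStore.search` accepting a namespace-prefix tuple):

```python
# Debugging sketch: list everything under the user's namespace, bypassing
# the agent and create_search_memory_tool entirely.
from postgresdb import get_store  # same helper as in the repro above

store = get_store()

# Note: the namespace here is the *resolved* tuple, not the "{user_id}" template.
items = store.search(("profiles", "u123"))
for item in items:
    print(item.namespace, item.key, item.value)

# If this prints nothing, the write path is failing (or writing to a different
# namespace). If it prints items but the search tool still returns nothing,
# the problem is on the retrieval/indexing side.
```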
Expected Behavior

- The agent should recall stored user traits
- `create_search_memory_tool` should retrieve schema-based memory
- Memory should be independent of `thread_id`
Actual Behavior

- The agent responds: "I don't have any information about you yet"
- Memory is written but not retrievable
- The search tool returns no results
Suspected Issue

- Schema-based memory written by `create_memory_store_manager` does not appear to be indexed in a way that `create_search_memory_tool` can retrieve
- There appears to be no automatic hydration of long-term memory into the agent context
- The documentation suggests this should work, but it does not in practice
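One thing I tried to rule out is a mismatch between the namespace the manager writes to and the one the search tool reads from: both are built from the `("profiles", "{user_id}")` template, which is filled in from `config["configurable"]` at runtime. A simplified, self-contained sketch of that resolution (the `resolve_namespace` helper is hypothetical, not langmem's actual implementation):

```python
# Simplified sketch of templated-namespace resolution. The real logic lives
# inside langmem; this helper is hypothetical and only illustrates the idea.
def resolve_namespace(template: tuple, config: dict) -> tuple:
    configurable = config.get("configurable", {})
    return tuple(
        part.format(**configurable) if "{" in part else part
        for part in template
    )

config = {"configurable": {"user_id": "u123", "thread_id": "t466"}}
print(resolve_namespace(("profiles", "{user_id}"), config))
# -> ('profiles', 'u123')
```

If the write and read paths resolve to different tuples (for example, because `user_id` is missing from the config on one of them), writes and searches will silently target different namespaces.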
Questions

- Is `create_search_memory_tool` expected to work with schema-based memory?
- Is additional configuration required to make structured memory searchable?
- Is auto-loading of long-term memory into the agent planned?