This directory contains example implementations showing how to use the Agent Memory Server
with the `agent_memory_client` SDK. Examples use LangChain's modern `create_agent` API
(LangGraph-based) for agent orchestration.
The examples require additional dependencies not included in the base installation. Install them with:
```bash
uv sync --group examples
```

Or, if you're using pip:

```bash
pip install openai langchain langchain-core langchain-openai langchain-community langgraph python-dotenv tavily-python redis
```

All examples require:

- `OPENAI_API_KEY` - Required for OpenAI ChatGPT
Some examples use additional environment variables documented in their sections below.
Examples expect the Agent Memory Server to be running locally with auth disabled:
```bash
DISABLE_AUTH=true uv run agent-memory api
```

Or using Docker:

```bash
docker-compose up
```

A comprehensive travel assistant that demonstrates:
- Automatic Tool Discovery: Uses `MemoryAPIClient.get_all_memory_tool_schemas()` to automatically discover and integrate all available memory tools
- Unified Tool Resolution: Leverages `client.resolve_tool_call()` to handle all memory tool calls uniformly across different LLM providers
- Working Memory Management: Session-based conversation state and structured memory storage
- Long-term Memory: Persistent memory storage with semantic, keyword, and hybrid search capabilities
- Optional Web Search: Cached web search using Tavily API with Redis caching
The travel agent automatically discovers and uses all memory tools available from the client:
- `search_memory` - Search memories using `semantic`, `keyword`, or `hybrid` search modes
- `get_or_create_working_memory` - Check the current working memory session
- `lazily_create_long_term_memory` - Store memories that will be promoted to long-term storage later
- `update_working_memory_data` - Store or update session-specific data such as trip plans
- `eagerly_create_long_term_memory` - Create long-term memories directly for immediate storage
- `get_long_term_memory` - Retrieve specific memories by ID
- `edit_long_term_memory` - Update existing memories
- `delete_long_term_memories` - Remove memories
- `get_current_datetime` - Get the current date/time for temporal context
Plus optional:
- `web_search` - Search the internet for current travel information (requires `TAVILY_API_KEY`)
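The discovery-and-resolution loop these tools enable can be sketched as follows. `StubMemoryClient` is a stand-in for `MemoryAPIClient` (construction and transport omitted); the two method names come from this README, but treat the exact signatures and return shapes as assumptions.

```python
import json

class StubMemoryClient:
    """Stand-in for MemoryAPIClient; the real client talks to the memory server."""

    def get_all_memory_tool_schemas(self):
        # The real client returns OpenAI-style schemas for every memory tool.
        return [{"type": "function",
                 "function": {"name": "search_memory",
                              "parameters": {"type": "object",
                                             "properties": {"query": {"type": "string"}}}}}]

    def resolve_tool_call(self, tool_call, session_id):
        # The real client dispatches the call server-side and returns its result.
        name = tool_call["function"]["name"]
        args = json.loads(tool_call["function"]["arguments"])
        return {"tool": name, "args": args, "session_id": session_id}

client = StubMemoryClient()

# 1. Discover tools and hand their schemas to the LLM.
tools = client.get_all_memory_tool_schemas()
tool_names = [t["function"]["name"] for t in tools]

# 2. When the LLM emits a tool call, resolve it uniformly through the client.
llm_tool_call = {"function": {"name": "search_memory",
                              "arguments": json.dumps({"query": "trip to Kyoto"})}}
outcome = client.resolve_tool_call(llm_tool_call, session_id="demo")
```

The same loop works unchanged whichever memory tool the LLM picks, which is what makes the resolution provider-agnostic.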
```bash
# Basic usage
python travel_agent.py

# With custom session
python travel_agent.py --session-id my_trip --user-id john_doe

# With custom memory server
python travel_agent.py --memory-server-url http://localhost:8001
```

- `OPENAI_API_KEY` - Required for OpenAI ChatGPT
- `TAVILY_API_KEY` - Optional, for web search functionality
- `MEMORY_SERVER_URL` - Memory server URL (default: `http://localhost:8000`)
- `REDIS_URL` - Redis URL for caching (default: `redis://localhost:6379`)
- Tool Auto-Discovery: Uses the client's built-in tool management for maximum compatibility
- Provider Agnostic: Tool resolution works with OpenAI, Anthropic, and other LLM providers
- Error Handling: Robust error handling for tool calls and network issues
- Logging: Comprehensive logging shows which tools are available and being used
A conversational assistant that demonstrates the memory prompt feature:
- Memory Prompt Integration: Uses `client.memory_prompt()` to automatically retrieve relevant memories
- Context-Aware Responses: Combines the system prompt with memory-enriched context
- Simplified Memory Management: No manual history management - memories are automatically retrieved
- Personalized Interactions: Provides contextual responses based on conversation history
- Store Messages: All user and assistant messages are stored in working memory
- Memory Prompt: For each turn, `memory_prompt()` retrieves relevant context from both working memory and long-term memories
- Enriched Context: The memory prompt results are combined with the system prompt
- LLM Generation: The enriched context is sent to the LLM for response generation
```bash
# Basic usage
python memory_prompt_agent.py

# With custom session
python memory_prompt_agent.py --session-id my_session --user-id jane_doe

# With custom memory server
python memory_prompt_agent.py --memory-server-url http://localhost:8001
```

- `OPENAI_API_KEY` - Required for OpenAI ChatGPT
- `MEMORY_SERVER_URL` - Memory server URL (default: `http://localhost:8000`)
- Automatic Memory Retrieval: Uses `memory_prompt()` to get relevant memories without manual management
- Context Enrichment: Combines the system prompt with formatted memory context
- Simplified Flow: No function calling - just enriched prompts for more contextual responses
- Personalization: Naturally incorporates user preferences and past conversations
A conversational assistant that demonstrates comprehensive memory editing capabilities:
- Memory Editing Workflow: Complete lifecycle of creating, searching, editing, and deleting memories through natural conversation
- All Memory Tools: Utilizes all available memory management tools including the new editing capabilities
- Realistic Scenarios: Shows common patterns like correcting information, updating preferences, and managing outdated data
- Interactive Demo: Both automated demo and interactive modes for exploring memory editing
The memory editing agent uses all memory tools to demonstrate comprehensive memory management:
- `search_memory` - Find existing memories using `semantic`, `keyword`, or `hybrid` search modes
- `get_long_term_memory` - Retrieve specific memories by ID for detailed review
- `lazily_create_long_term_memory` - Store memories that will be promoted to long-term storage later
- `eagerly_create_long_term_memory` - Create long-term memories directly for immediate storage
- `edit_long_term_memory` - Update existing memories with corrections or new information
- `delete_long_term_memories` - Remove memories that are no longer relevant or accurate
- `get_or_create_working_memory` - Check the current working memory session
- `update_working_memory_data` - Store session-specific data
- `get_current_datetime` - Get the current date/time for temporal context
- Corrections: "Actually, I work at Microsoft, not Google" → Search for job memory, edit company name
- Updates: "I got promoted to Senior Engineer" → Find job memory, update title and add promotion date
- Preference Changes: "I prefer tea over coffee now" → Search beverage preferences, update from coffee to tea
- Life Changes: "I moved to Seattle" → Find location memories, update address/city information
- Information Cleanup: "Delete that old job information" → Search and remove outdated employment data
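The correction pattern above (search first, then edit in place rather than create a duplicate) can be sketched with in-memory stand-ins; the tool names match this README, while the record shape and matching logic are illustrative assumptions:

```python
# In-memory store standing in for the server's long-term memory.
memories = {"m1": {"id": "m1", "text": "User works at Google"}}

def search_memory(query):
    # The real tool supports semantic/keyword/hybrid search; a naive
    # word-overlap match is a stand-in here.
    words = query.lower().split()
    return [m for m in memories.values()
            if any(w in m["text"].lower() for w in words)]

def edit_long_term_memory(memory_id, text):
    # Update the existing record instead of creating a duplicate.
    memories[memory_id]["text"] = text
    return memories[memory_id]

# "Actually, I work at Microsoft, not Google"
hits = search_memory("user work employer")
edited = edit_long_term_memory(hits[0]["id"], "User works at Microsoft")
```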
```bash
# Interactive mode (default)
python memory_editing_agent.py

# Automated demo showing memory editing scenarios
python memory_editing_agent.py --demo

# With custom session
python memory_editing_agent.py --session-id my_session --user-id alice

# With custom memory server
python memory_editing_agent.py --memory-server-url http://localhost:8001
```

- `OPENAI_API_KEY` - Required for OpenAI ChatGPT
- `MEMORY_SERVER_URL` - Memory server URL (default: `http://localhost:8000`)
- Memory-First Approach: Always searches for existing memories before creating new ones to avoid duplicates
- Intelligent Updates: Provides context-aware suggestions for editing vs creating new memories
- Error Handling: Robust handling of memory operations with clear user feedback
- Natural Conversation: Explains memory actions as part of natural dialogue flow
- Comprehensive Coverage: Demonstrates all memory CRUD operations through realistic conversation patterns
The automated demo shows a realistic conversation where the agent:
- Initial Information: User shares basic profile information (name, job, preferences)
- Corrections: User corrects previously shared information (job company change)
- Updates: User provides updates to existing information (promotion, new title)
- Multiple Changes: User updates multiple pieces of information at once (location, preferences)
- Information Retrieval: User asks what the agent remembers to verify updates
- Ongoing Updates: User continues to update information (new job level)
- Memory Management: User requests specific memory operations (show/delete specific memories)
This example provides a complete reference for implementing memory editing in conversational AI applications.
A functional tutor: runs quizzes, stores results as episodic memories, tracks weak concepts as semantic memories, suggests next practice, and summarizes recent activity. Uses `create_agent` (LangGraph) for tool-calling agent orchestration.
```bash
python ai_tutor.py --demo
python ai_tutor.py --user-id student --session-id s1
```

- Episodic: Per-question results with `event_date` and `topics=["quiz", topic, concept]`
- Semantic: Weak concepts tracked with `topics=["weak_concept", topic, concept]`
- Guidance: `practice-next` and `summary` commands
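The two memory shapes above can be sketched as plain records; the `event_date` and `topics` fields come from this README, while the other fields and values are illustrative:

```python
from datetime import date

def quiz_result_memory(topic, concept, correct):
    # Episodic: one record per answered question, anchored to a date.
    return {
        "memory_type": "episodic",
        "text": f"Answered {concept} ({topic}) "
                f"{'correctly' if correct else 'incorrectly'}",
        "event_date": date(2024, 5, 1).isoformat(),  # hypothetical quiz date
        "topics": ["quiz", topic, concept],
    }

def weak_concept_memory(topic, concept):
    # Semantic: a durable fact about the student's weak spots.
    return {
        "memory_type": "semantic",
        "text": f"Student struggles with {concept} in {topic}",
        "topics": ["weak_concept", topic, concept],
    }

episodic = quiz_result_memory("algebra", "factoring", correct=False)
weak = weak_concept_memory("algebra", "factoring")
```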
Shows how to integrate `agent_memory_client` tools with LangChain's `create_agent` API (LangGraph-based).
- Tool Schema Discovery: Uses `get_all_memory_tool_schemas()` to auto-discover memory tools
- LangGraph Agent: Creates agents with `create_agent` and a `MemorySaver` checkpointer
- Hybrid Search: Demonstrates `semantic`, `keyword`, and `hybrid` search modes
- State Persistence: Uses `MemorySaver` for multi-turn conversation state
```bash
python langchain_integration_example.py
```

- `OPENAI_API_KEY` - Required for OpenAI ChatGPT
- `MEMORY_SERVER_URL` - Memory server URL (default: `http://localhost:8000`)
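The wiring can be sketched framework-agnostically: each discovered schema becomes a named callable that a tool-calling agent (including one built with `create_agent`) can bind. The schema shape and resolver below are stand-in assumptions; in the real example the dispatch point is `client.resolve_tool_call()`.

```python
import json

def make_tool_fn(name, resolver, session_id):
    # Wrap one discovered memory tool as a callable an agent framework can bind.
    def tool_fn(**kwargs):
        call = {"function": {"name": name, "arguments": json.dumps(kwargs)}}
        return resolver(call, session_id)
    tool_fn.__name__ = name
    return tool_fn

def stub_resolver(call, session_id):
    # Stand-in for client.resolve_tool_call().
    return f"{call['function']['name']} handled for {session_id}"

schemas = [{"function": {"name": "search_memory"}},
           {"function": {"name": "edit_long_term_memory"}}]

tools = [make_tool_fn(s["function"]["name"], stub_resolver, "s1")
         for s in schemas]
result = tools[0](query="hybrid search demo")
```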
Demonstrates the `recent_messages_limit` parameter for controlling how many recent messages are returned when retrieving working memory.
- Message Limiting: Shows how `recent_messages_limit` caps the number of messages returned
- Recent Messages Window: Returns up to N most recent messages in chronological order (oldest to newest)
- Data Preservation: Context and structured data are always returned regardless of message limit
- Client SDK Usage: Uses `agent_memory_client` for working memory and `httpx` for the limit parameter
```bash
python recent_messages_limit_demo.py
```

- Retrieve all messages (no limit)
- Retrieve last N messages (limit < total)
- Retrieve with limit > total (returns all)
- Verify context/data preserved with message limiting
- `MEMORY_SERVER_URL` - Memory server URL (default: `http://localhost:8000`)
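The limiting behavior the demo verifies reduces to a window over the tail of the message list; a minimal sketch, assuming messages are stored oldest-to-newest (context and structured data are returned separately and are unaffected):

```python
def apply_recent_messages_limit(messages, limit=None):
    # No limit, or a limit at least as large as the history, returns
    # everything; otherwise keep the N most recent messages, still in
    # chronological order.
    if limit is None or limit >= len(messages):
        return messages
    return messages[-limit:]

history = ["m1", "m2", "m3", "m4"]
last_two = apply_recent_messages_limit(history, limit=2)
```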