Commit db56765 (parent 066f6f2)

style(examples): clean up unused imports and type ignores

4 files changed — 101 additions & 81 deletions


docs/langgraph_integration.md

Lines changed: 73 additions & 55 deletions
````diff
@@ -1,79 +1,97 @@
-# LangGraph Integration
+# MemU LangGraph Integration
 
-This integration provides a seamless way to use MemU's long-term memory capabilities within LangGraph and LangChain agents.
+The MemU LangGraph Integration provides a seamless adapter to expose MemU's powerful memory capabilities (`memorize` and `retrieve`) as standard [LangChain](https://python.langchain.com/) / [LangGraph](https://langchain-ai.github.io/langgraph/) tools. This allows your agents to persist information and recall it across sessions using MemU as the long-term memory backend.
 
-## Installation
-
-Ensure you have MemU installed with the LangGraph optional dependencies:
-
-```bash
-uv sync --extra langgraph
-# OR
-pip install memu[langgraph]
-```
+## Overview
 
-## Usage
+This integration wraps the `MemoryService` and exposes two key tools:
+- **`save_memory`**: Persists text, conversation snippets, or facts associated with a user.
+- **`search_memory`**: Retrieves relevant memories based on semantic search queries.
 
-The integration exposes a helper class `MemULangGraphTools` that wraps MemU's `MemoryService` into LangChain-compatible tools.
+These tools are fully typed and compatible with LangGraph's `prebuilt.ToolNode` and LangChain's agents.
 
-### 1. Initialize MemoryService
-
-First, initialize the core memory service.
+## Installation
 
-```python
-from memu.app.service import MemoryService
+To use this integration, you need to install the optional dependencies:
 
-service = MemoryService()
+```bash
+uv add langgraph langchain-core
 ```
 
-### 2. Create Tools
+## Quick Start
 
-Pass the service to the adapter to generate the tools.
+Here is a complete example of how to initialize the MemU memory service and bind it to a LangGraph agent.
 
 ```python
+import asyncio
+import os
+from memu.app.service import MemoryService
 from memu.integrations.langgraph import MemULangGraphTools
 
-adapter = MemULangGraphTools(service)
-tools = adapter.tools()
-# tools now contains [save_memory, search_memory]
+# Ensure you have your configuration set (e.g., env vars for DB connection)
+# os.environ["MEMU_DATABASE_URL"] = "..."
+
+async def main():
+    # 1. Initialize MemoryService
+    memory_service = MemoryService()
+    # If your service requires async init (check your specific implementation):
+    # await memory_service.initialize()
+
+    # 2. Instantiate MemULangGraphTools
+    memu_tools = MemULangGraphTools(memory_service)
+
+    # Get the list of tools (BaseTool compatible)
+    tools = memu_tools.tools()
+
+    # 3. Example Usage: Manually invoking a tool
+    # In a real app, you would pass 'tools' to your LangGraph agent or StateGraph.
+
+    # Save a memory
+    save_tool = memu_tools.save_memory_tool()
+    print("Saving memory...")
+    result = await save_tool.ainvoke({
+        "content": "The user prefers dark mode.",
+        "user_id": "user_123",
+        "metadata": {"category": "preferences"}
+    })
+    print(f"Save Result: {result}")
+
+    # Search for a memory
+    search_tool = memu_tools.search_memory_tool()
+    print("\nSearching memory...")
+    search_result = await search_tool.ainvoke({
+        "query": "What are the user's preferences?",
+        "user_id": "user_123"
+    })
+    print(f"Search Result:\n{search_result}")
+
+if __name__ == "__main__":
+    asyncio.run(main())
 ```
 
-### 3. Integrate with LangGraph
-
-Add the tools to your LangGraph `ToolNode` or bind them to your LLM.
-
-```python
-from langgraph.prebuilt import ToolNode
-from langchain_openai import ChatOpenAI
+## API Reference
 
-# Bind tools to LLM
-llm = ChatOpenAI(model="gpt-4")
-llm_with_tools = llm.bind_tools(tools)
+### `MemULangGraphTools`
 
-# Create ToolNode
-tool_node = ToolNode(tools=tools)
+The main adapter class.
 
-# ... Build your StateGraph using tool_node ...
+```python
+class MemULangGraphTools(memory_service: MemoryService)
 ```
 
-## Available Tools
-
-### `save_memory`
-Allows the agent to save important information, user preferences, or conversation snippets to long-term memory.
-
-- **Arguments:**
-  - `content` (str): The information to save.
-  - `user_id` (str): The user associated with this memory.
-  - `metadata` (dict, optional): Additional context (e.g., category, importance).
-
-### `search_memory`
-Allows the agent to retrieve relevant information from memory based on a query.
+#### `save_memory_tool() -> StructuredTool`
+Returns a tool named `save_memory`.
+- **Inputs**: `content` (str), `user_id` (str), `metadata` (dict, optional).
+- **Description**: Save a piece of information, conversation snippet, or memory for a user.
 
-- **Arguments:**
-  - `query` (str): The question or topic to search for.
-  - `user_id` (str): The user associated with this memory.
-  - `limit` (int, default=5): Number of results to return.
+#### `search_memory_tool() -> StructuredTool`
+Returns a tool named `search_memory`.
+- **Inputs**: `query` (str), `user_id` (str), `limit` (int, default=5), `metadata_filter` (dict, optional), `min_relevance_score` (float, default=0.0).
+- **Description**: Search for relevant memories or information for a user based on a query.
 
-## Example
+## Troubleshooting
 
-See `examples/langgraph_demo.py` for a complete running example of a chatbot that uses MemU to remember user details across sessions.
+### Import Errors
+If you see an `ImportError` regarding `langchain_core` or `langgraph`:
+1. Ensure you have installed the extras: `uv add langgraph langchain-core` (or `pip install langgraph langchain-core`).
+2. Verify your virtual environment is active.
````
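The Quick Start in the new docs centers on LangChain's dict-based `ainvoke` calling convention, where tool arguments are passed as a single dict. The snippet below is a minimal, self-contained sketch of that convention; `FakeSaveMemoryTool` is a hypothetical stand-in for illustration only, not a real MemU or LangChain class:

```python
import asyncio

# Hypothetical stand-in for a StructuredTool; a real save_memory tool would
# persist via MemoryService. Here we only echo to show the calling shape.
class FakeSaveMemoryTool:
    name = "save_memory"

    async def ainvoke(self, args: dict) -> str:
        # Arguments arrive as one dict keyed by parameter name.
        return f"saved for {args['user_id']}: {args['content']}"

async def demo() -> str:
    tool = FakeSaveMemoryTool()
    return await tool.ainvoke({
        "content": "The user prefers dark mode.",
        "user_id": "user_123",
    })

result = asyncio.run(demo())
print(result)  # saved for user_123: The user prefers dark mode.
```

The same dict-of-arguments shape is what LangGraph's `ToolNode` constructs from a model's tool calls, which is why the docs invoke the real tools this way.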

examples/langgraph_demo.py

Lines changed: 22 additions & 20 deletions
```diff
@@ -1,18 +1,16 @@
 import asyncio
 import os
 import sys
+from typing import Annotated, Any, Literal, TypedDict
 
 # Ensure we can import from src
 sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "../src")))
 
-from typing import Annotated, Literal
-
 try:
-    from typing import TypedDict
-
-    from langchain_core.messages import HumanMessage
+    from langchain_core.messages import BaseMessage, HumanMessage
+    from langchain_core.tools import BaseTool
     from langchain_openai import ChatOpenAI
-    from langgraph.graph import END, START, StateGraph
+    from langgraph.graph import START, StateGraph
     from langgraph.graph.message import add_messages
     from langgraph.prebuilt import ToolNode
 except ImportError:
@@ -25,14 +23,14 @@
 
 # Define state
 class State(TypedDict):
-    messages: Annotated[list, add_messages]
+    messages: Annotated[list[BaseMessage], add_messages]
 
 
-def build_demo_graph(tools, llm_model):
+def build_demo_graph(tools: list[BaseTool], llm_model: ChatOpenAI) -> Any:
     """Build the LangGraph state graph for the demo."""
     llm_with_tools = llm_model.bind_tools(tools)
 
-    def chatbot(state: State):
+    def chatbot(state: State) -> dict[str, list[BaseMessage]]:
         return {"messages": [llm_with_tools.invoke(state["messages"])]}
 
     graph_builder = StateGraph(State)
@@ -43,20 +41,21 @@ def chatbot(state: State):
 
     graph_builder.add_edge(START, "chatbot")
 
-    def should_continue(state: State) -> Literal["tools", END]:
+    # Note: END is actually "__end__", we use the literal string for the Type Hint to avoid MyPy error
+    def should_continue(state: State) -> Literal["tools", "__end__"]:
         messages = state["messages"]
         last_message = messages[-1]
-        if last_message.tool_calls:
+        if hasattr(last_message, "tool_calls") and last_message.tool_calls:
            return "tools"
-        return END
+        return "__end__"
 
     graph_builder.add_conditional_edges("chatbot", should_continue)
     graph_builder.add_edge("tools", "chatbot")
 
     return graph_builder.compile()
 
 
-async def initialize_infrastructure():
+async def initialize_infrastructure() -> tuple[MemoryService | None, list[BaseTool] | None]:
     """Initialize MemoryService and MemULangGraphTools."""
     print("=== MemU LangGraph Demo ===")
     try:
@@ -73,7 +72,7 @@ async def initialize_infrastructure():
     return service, tools
 
 
-async def process_conversation(graph, user_input: str):
+async def process_conversation(graph: Any, user_input: str) -> None:
     """Handle the main conversation flow."""
     print(f"\nUser: {user_input}")
     events = graph.stream({"messages": [HumanMessage(content=user_input)]}, stream_mode="values")
@@ -83,13 +82,14 @@ async def process_conversation(graph, user_input: str):
         last_msg = event["messages"][-1]
         if last_msg.type == "ai":
             print(f"Agent: {last_msg.content}")
-            if last_msg.tool_calls:
+            # Check for tool_calls safely
+            if hasattr(last_msg, "tool_calls") and last_msg.tool_calls:
                 print(f" (Tool Call: {last_msg.tool_calls})")
         elif last_msg.type == "tool":
             print(f"Tool Output: {last_msg.content}")
 
 
-async def process_retrieval(graph, search_input: str):
+async def process_retrieval(graph: Any, search_input: str) -> None:
     """Handle the retrieval flow."""
     print(f"\nUser: {search_input}")
     events = graph.stream({"messages": [HumanMessage(content=search_input)]}, stream_mode="values")
@@ -101,13 +101,15 @@ async def process_retrieval(graph, search_input: str):
             print(f"Agent: {last_msg.content}")
 
 
-async def run_demo():
+async def run_demo() -> None:
     """Main orchestration function."""
-    service, tools = await initialize_infrastructure()
-    if not service:
+    service_and_tools = await initialize_infrastructure()
+    service, tools = service_and_tools
+
+    if not service or not tools:
         return
 
-    if not os.getenv("OPENAI_API_KEY"):
+    if not os.environ.get("OPENAI_API_KEY"):
         print("⚠️ OPENAI_API_KEY not found. Please set it to run the agent.")
         return
 
```
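The `hasattr` guard added to `should_continue` works because only AI messages carry a `tool_calls` attribute; other message types would raise `AttributeError` under the old code. The routing logic can be illustrated in isolation; the message classes below are hypothetical stand-ins, not the real `langchain_core` types:

```python
# Minimal stand-ins (NOT the real langchain_core message classes) used only
# to show why the demo guards with hasattr before reading tool_calls.
class FakeAIMessage:
    def __init__(self, tool_calls=None):
        self.tool_calls = tool_calls or []

class FakeToolMessage:
    pass  # deliberately has no tool_calls attribute

def should_continue(last_message) -> str:
    # Route to the tool node only when the last AI message requested tools;
    # otherwise return LangGraph's end sentinel, the literal string "__end__".
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "tools"
    return "__end__"

print(should_continue(FakeAIMessage([{"name": "save_memory"}])))  # tools
print(should_continue(FakeToolMessage()))  # __end__
```

Returning the literal `"__end__"` rather than `END` also keeps the `Literal` return annotation valid for MyPy, as the diff's comment notes.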
pyproject.toml

Lines changed: 2 additions & 2 deletions
```diff
@@ -26,8 +26,6 @@ dependencies = [
     "sqlmodel>=0.0.27",
     "alembic>=1.14.0",
     "pendulum>=3.1.0",
-    "langgraph>=1.0.6",
-    "langchain-openai>=1.1.7",
     "langchain-core>=1.2.7",
 ]
 
@@ -45,6 +43,8 @@ dev = [
     {include-group = "docs"},
     {include-group = "lint"},
     {include-group = "test"},
+    "langchain-openai>=1.1.7",
+    "langgraph>=1.0.6",
 ]
 docs = [
     "mkdocs>=1.6.1",
```
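The net effect of these two hunks is to demote `langgraph` and `langchain-openai` from runtime dependencies to the `dev` dependency group, leaving `langchain-core` as the only LangChain runtime requirement. A sketch of the resulting sections, reconstructed from the diff and assuming the groups live under PEP 735's `[dependency-groups]` table (which the `{include-group = ...}` syntax suggests; surrounding entries elided):

```toml
[project]
dependencies = [
    "sqlmodel>=0.0.27",
    "alembic>=1.14.0",
    "pendulum>=3.1.0",
    "langchain-core>=1.2.7",
]

[dependency-groups]
dev = [
    {include-group = "docs"},
    {include-group = "lint"},
    {include-group = "test"},
    "langchain-openai>=1.1.7",
    "langgraph>=1.0.6",
]
```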

uv.lock

Lines changed: 4 additions & 4 deletions
Some generated files are not rendered by default.
