
Commit b04b613

committed
feat: add run server api md and change user_id to user_name
1 parent 070b2df commit b04b613

4 files changed: +227 −15 lines
Lines changed: 223 additions & 0 deletions
@@ -0,0 +1,223 @@
# Server API Usage Guide

This guide explains how to use the MemOS Server API for memory management operations.

## Prerequisites

Before using the Server API, ensure you have the necessary dependencies installed and the environment properly configured.

## 1. Environment Configuration

Create a `.env` file in the `MemOS` directory with the following configuration parameters. These settings are used by the server router configuration defined in `MemOS/src/memos/api/routers/server_router.py`.

### Required Environment Variables

#### LLM Configuration (OpenAI)
```bash
# OpenAI API Configuration
OPENAI_API_KEY=your-api-key-here
OPENAI_API_BASE=https://api.openai.com/v1
MOS_CHAT_MODEL=gpt-4o-mini
MOS_CHAT_TEMPERATURE=0.8
MOS_MAX_TOKENS=1024
MOS_TOP_P=0.9
MOS_TOP_K=50
MOS_CHAT_MODEL_PROVIDER=openai
```
#### Embedder Configuration
```bash
# Embedder Backend: ollama or universal_api
MOS_EMBEDDER_BACKEND=ollama
MOS_EMBEDDER_MODEL=nomic-embed-text:latest
OLLAMA_API_BASE=http://localhost:11434
EMBEDDING_DIMENSION=1024

# If using universal_api embedder:
# MOS_EMBEDDER_BACKEND=universal_api
# MOS_EMBEDDER_PROVIDER=openai
# MOS_EMBEDDER_API_KEY=sk-xxxx
# MOS_EMBEDDER_MODEL=text-embedding-3-large
# MOS_EMBEDDER_API_BASE=http://openai.com
```
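
If you keep the default `ollama` backend, the daemon must be running locally and the embedding model already pulled. The snippet below is an optional sanity check, not part of MemOS itself; it assumes Ollama's standard `/api/tags` listing endpoint and the `requests` package.

```python
# Optional check: is the Ollama daemon reachable and the embedder model pulled?
# Assumes Ollama's standard /api/tags endpoint and the requests package.
import requests

OLLAMA_API_BASE = "http://localhost:11434"

resp = requests.get(f"{OLLAMA_API_BASE}/api/tags", timeout=5)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
if "nomic-embed-text:latest" not in models:
    print("Embedder model not found; run: ollama pull nomic-embed-text")
else:
    print("Embedder model is available.")
```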
#### Graph Database Configuration
```bash
# Graph DB Backend: neo4j-community, neo4j, or nebular
NEO4J_BACKEND=nebular

# Neo4j Configuration (if using neo4j or neo4j-community)
# NEO4J_URI=bolt://localhost:7687
# NEO4J_USER=neo4j
# NEO4J_PASSWORD=12345678
# NEO4J_DB_NAME=neo4j
# MOS_NEO4J_SHARED_DB=false

# Nebular Configuration (if using nebular)
NEBULAR_HOSTS=["localhost"]
NEBULAR_USER=root
NEBULAR_PASSWORD=xxxxxx
NEBULAR_SPACE=shared-tree-textual-memory
```

#### Vector Database Configuration (for Neo4j Community)
```bash
# Qdrant Configuration
QDRANT_HOST=localhost
QDRANT_PORT=6333
```
#### Reranker Configuration
```bash
# Reranker Backend: http_bge or cosine_local
MOS_RERANKER_BACKEND=http_bge
MOS_RERANKER_URL=http://your-reranker-url
MOS_RERANKER_MODEL=bge-reranker-v2-m3
```

#### Optional: Internet Search Configuration
```bash
# Internet Search (optional)
ENABLE_INTERNET=false
BOCHA_API_KEY=your-bocha-api-key
```

#### Optional: Memory Reader Configuration
```bash
# Memory Reader LLM (if different from chat model)
MEMRADER_MODEL=gpt-4o-mini
MEMRADER_API_KEY=EMPTY
MEMRADER_API_BASE=https://api.openai.com/v1
```

#### Optional: Additional Settings
```bash
# Enable default cube configuration
MOS_ENABLE_DEFAULT_CUBE_CONFIG=true

# Enable memory reorganization
MOS_ENABLE_REORGANIZE=false

# Enable activation memory
ENABLE_ACTIVATION_MEMORY=false
```

### Example .env File Location
Place your `.env` file at: `MemOS/.env`
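
Before launching the server, you can optionally confirm that the file is readable and the key variables are set. This is a minimal sketch, assuming the `python-dotenv` package; the variable names simply mirror the ones listed above, and MemOS may load the file differently at startup.

```python
# Optional: confirm the .env file is readable and key variables are set.
# Assumes the python-dotenv package; MemOS may load the file differently at startup.
import os

from dotenv import load_dotenv

load_dotenv("MemOS/.env")  # adjust the path if you run this from inside MemOS/

required = ["OPENAI_API_KEY", "MOS_EMBEDDER_BACKEND", "NEO4J_BACKEND"]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {missing}")
print("Environment looks complete.")
```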
## 2. Start the Server

Navigate to the MemOS directory and start the server using uvicorn:

```bash
cd MemOS
uvicorn memos.api.server_api:app --host 0.0.0.0 --port 8002 --workers 4
```

The server will start on `http://0.0.0.0:8002`.
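
To confirm the server is reachable, you can fetch the automatically generated OpenAPI schema. This assumes the FastAPI defaults (`/openapi.json`, interactive docs at `/docs`) have not been disabled for this app.

```python
# Quick reachability check against the running server.
# Assumes the FastAPI default /openapi.json route is enabled.
import requests

BASE_URL = "http://127.0.0.1:8002"

resp = requests.get(f"{BASE_URL}/openapi.json", timeout=5)
resp.raise_for_status()
routes = sorted(resp.json().get("paths", {}))
print(f"Server is up, {len(routes)} routes exposed:", routes[:5])
```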
## 3. API Usage Examples

### 3.1 Add Memories API

Use this endpoint to add conversation memories to the system.

**Endpoint:** `POST /product/add`

**Request Example:**
```bash
curl --location --request POST 'http://127.0.0.1:8002/product/add' \
--header 'Content-Type: application/json' \
--data-raw '{
    "messages": [
        {
            "role": "user",
            "content": "Where should I go for Christmas?"
        },
        {
            "role": "assistant",
            "content": "There are many places to visit during Christmas, such as the Bund and Disneyland."
        }
    ],
    "user_id": "xiaoniuma2",
    "mem_cube_id": "lichunyu2"
}'
```

**Request Parameters:**
- `messages` (array): Conversation messages to be stored as memories
  - `role` (string): Message role ("user" or "assistant")
  - `content` (string): Message content
- `user_id` (string): Unique identifier for the user
- `mem_cube_id` (string): Memory cube identifier for organizing memories
- `session_id` (string, optional): Session identifier (defaults to "default_session")

**Response:**
```json
{
  "message": "Memory added successfully",
  "data": [
    {
      "memory": "User wants to know where to go for Christmas",
      "memory_id": "uuid-generated-id",
      "memory_type": "fact"
    }
  ]
}
```
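
The same call can be made from Python. This is a sketch mirroring the curl example above, using the `requests` package; the response handling assumes the `data` list shown in the sample response.

```python
# Add memories via POST /product/add (mirrors the curl example above).
import requests

BASE_URL = "http://127.0.0.1:8002"

payload = {
    "messages": [
        {"role": "user", "content": "Where should I go for Christmas?"},
        {
            "role": "assistant",
            "content": "There are many places to visit during Christmas, "
            "such as the Bund and Disneyland.",
        },
    ],
    "user_id": "xiaoniuma2",
    "mem_cube_id": "lichunyu2",
}

resp = requests.post(f"{BASE_URL}/product/add", json=payload, timeout=30)
resp.raise_for_status()
for item in resp.json().get("data", []):
    print(item["memory_type"], "->", item["memory"])
```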
### 3.2 Search Memories API

Use this endpoint to search for relevant memories based on a query. Note that this commit moves the router prefix from `/server` to `/product`, so the search route follows the same prefix as the add route.

**Endpoint:** `POST /product/search`

**Request Example:**
```bash
curl --location --request POST 'http://127.0.0.1:8002/product/search' \
--header 'Authorization: Token mpg-7g588gVSTTKLx1sYkNv7orfqFUX4iBbZfb3xjsh3' \
--header 'Content-Type: application/json' \
--data-raw '{
    "user_id": "xiaoniuma2",
    "mem_cube_id": "lichunyu3",
    "query": "How to enjoy Christmas?"
}'
```

**Request Parameters:**
- `user_id` (string): Unique identifier for the user
- `mem_cube_id` (string): Memory cube identifier
- `query` (string): Search query text
- `session_id` (string, optional): Session identifier for filtering results
- `top_k` (integer, optional): Number of top results to return (default: 5)
- `mode` (string, optional): Search mode
- `internet_search` (boolean, optional): Enable internet search (default: false)
- `moscube` (boolean, optional): Enable moscube search (default: false)
- `chat_history` (array, optional): Chat history for context

**Response:**
```json
{
  "message": "Search completed successfully",
  "data": {
    "text_mem": [
      {
        "cube_id": "lichunyu3",
        "memories": [
          {
            "id": "memory-uuid",
            "memory": "User wants to know where to go for Christmas",
            "ref_id": "[abc123]",
            "metadata": {
              "memory_type": "fact",
              "created_at": "2024-01-01T00:00:00Z"
            }
          }
        ]
      }
    ],
    "act_mem": [],
    "para_mem": []
  }
}
```
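
The equivalent search call from Python, as a sketch based on the samples above; the token value is a placeholder, and the response iteration assumes the `text_mem` structure shown in the sample response.

```python
# Search memories via POST /product/search (mirrors the curl example above).
import requests

BASE_URL = "http://127.0.0.1:8002"

payload = {
    "user_id": "xiaoniuma2",
    "mem_cube_id": "lichunyu3",
    "query": "How to enjoy Christmas?",
    "top_k": 5,
}

resp = requests.post(
    f"{BASE_URL}/product/search",
    headers={"Authorization": "Token <your-api-token>"},  # placeholder token
    json=payload,
    timeout=30,
)
resp.raise_for_status()
for cube in resp.json()["data"]["text_mem"]:
    for memory in cube["memories"]:
        print(cube["cube_id"], memory["ref_id"], memory["memory"])
```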

src/memos/api/routers/server_router.py

Lines changed: 2 additions & 2 deletions

```diff
@@ -33,7 +33,7 @@

 logger = get_logger(__name__)

-router = APIRouter(prefix="/server", tags=["Server API"])
+router = APIRouter(prefix="/product", tags=["Server API"])


 def _build_graph_db_config(user_id: str = "default") -> Dict[str, Any]:
@@ -209,7 +209,7 @@ def search_memories(search_req: APISearchRequest):
         user_name=search_req.mem_cube_id,
         top_k=search_req.top_k,
         mode=search_req.mode,
-        internet_search=not search_req.internet_search,
+        manual_close_internet=not search_req.internet_search,
         moscube=search_req.moscube,
         search_filter=search_filter,
         info={
```

src/memos/memories/textual/simple_tree.py

Lines changed: 1 addition & 12 deletions

```diff
@@ -73,18 +73,7 @@ def __init__(self,
         logger.info(f"time init: reranker time is: {time.time() - time_start_rr}")

         time_start_mm = time.time()
-        self.memory_manager: MemoryManager = MemoryManager(
-            self.graph_store,
-            self.embedder,
-            self.extractor_llm,
-            memory_size=config.memory_size
-            or {
-                "WorkingMemory": 20,
-                "LongTermMemory": 1500,
-                "UserMemory": 480,
-            },
-            is_reorganize=is_reorganize,
-        )
+        self.memory_manager: MemoryManager = memory_manager
         logger.info(f"time init: memory_manager time is: {time.time() - time_start_mm}")
         time_start_ir = time.time()
         # Create internet retriever if configured
```

src/memos/memories/textual/tree_text_memory/retrieve/searcher.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -86,7 +86,7 @@ def search(
         logger.debug(f"[SEARCH] Received info dict: {info}")

         parsed_goal, query_embedding, context, query = self._parse_task(
-            query, info, mode, search_filter=search_filter, user_id=user_id
+            query, info, mode, search_filter=search_filter, user_name=user_name
         )
         results = self._retrieve_paths(
             query, parsed_goal, query_embedding, info, top_k, mode, memory_type, search_filter,user_name
```

0 commit comments
