
Commit a52a9e8

Feat/fix palyground bug (#621)

Authored by Wang-Daojiyuan and yuan.wang

* fix playground bug, internet search judge
* fix playground internet bug
* modify delete mem
* modify tool resp bug in multi cube
* fix bug in playground chat handle and search inter
* modify prompt

Co-authored-by: yuan.wang <[email protected]>
1 parent: 7866f21

File tree: 3 files changed, +21 −10 lines

src/memos/api/handlers/chat_handler.py

Lines changed: 15 additions & 9 deletions
```diff
@@ -159,9 +159,11 @@ def handle_chat_complete(self, chat_req: APIChatCompleteRequest) -> dict[str, An
 
         # Step 3: Generate complete response from LLM
         if chat_req.model_name_or_path and chat_req.model_name_or_path not in self.chat_llms:
-            return {
-                "message": f"Model {chat_req.model_name_or_path} not suport, choose from {list(self.chat_llms.keys())}"
-            }
+            raise HTTPException(
+                status_code=400,
+                detail=f"Model {chat_req.model_name_or_path} not suport, choose from {list(self.chat_llms.keys())}",
+            )
+
         model = chat_req.model_name_or_path or next(iter(self.chat_llms.keys()))
         response = self.chat_llms[model].generate(current_messages, model_name_or_path=model)
@@ -281,9 +283,11 @@ def generate_chat_response() -> Generator[str, None, None]:
             chat_req.model_name_or_path
             and chat_req.model_name_or_path not in self.chat_llms
         ):
-            return {
-                "message": f"Model {chat_req.model_name_or_path} not suport, choose from {list(self.chat_llms.keys())}"
-            }
+            raise HTTPException(
+                status_code=400,
+                detail=f"Model {chat_req.model_name_or_path} not suport, choose from {list(self.chat_llms.keys())}",
+            )
+
         model = chat_req.model_name_or_path or next(iter(self.chat_llms.keys()))
         response_stream = self.chat_llms[model].generate_stream(
             current_messages, model_name_or_path=model
@@ -517,9 +521,11 @@ def generate_chat_response() -> Generator[str, None, None]:
             chat_req.model_name_or_path
             and chat_req.model_name_or_path not in self.chat_llms
         ):
-            return {
-                "message": f"Model {chat_req.model_name_or_path} not suport, choose from {list(self.chat_llms.keys())}"
-            }
+            raise HTTPException(
+                status_code=400,
+                detail=f"Model {chat_req.model_name_or_path} not suport, choose from {list(self.chat_llms.keys())}",
+            )
+
         model = chat_req.model_name_or_path or next(iter(self.chat_llms.keys()))
         response_stream = self.chat_llms[model].generate_stream(
             current_messages, model_name_or_path=model
```

src/memos/memories/textual/tree_text_memory/retrieve/searcher.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -227,7 +227,8 @@ def _parse_task(
         query_embedding = None
 
         # fine mode will trigger initial embedding search
-        if mode == "fine_old":
+        # TODO: tmp "playground_search_goal_parser" for playground search goal parser, will be removed later
+        if mode == "fine_old" or kwargs.get("playground_search_goal_parser", False):
             logger.info("[SEARCH] Fine mode: embedding search")
             query_embedding = self.embedder.embed([query])[0]
```
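The searcher change gates the embedding pre-search on either the mode or a temporary keyword flag pulled from `**kwargs`. A sketch of that gating pattern, with a fake embedder in place of the repo's `self.embedder` (function and variable names here are stand-ins):

```python
def parse_task(query: str, mode: str, embed, **kwargs):
    """Compute a query embedding only when fine mode, or the temporary
    playground flag, asks for it -- mirroring the gating in _parse_task."""
    query_embedding = None
    # Escape hatch: a kwargs flag forces the embedding search even outside fine mode.
    if mode == "fine_old" or kwargs.get("playground_search_goal_parser", False):
        query_embedding = embed([query])[0]
    return query_embedding


# Fake embedder: returns one fixed vector per input text.
fake_embed = lambda texts: [[0.1, 0.2, 0.3] for _ in texts]
```

Because the flag defaults to `False` via `kwargs.get(..., False)`, existing callers that never pass it are unaffected, which is what makes this a low-risk temporary hook to remove later.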

src/memos/templates/mos_prompts.py

Lines changed: 4 additions & 0 deletions
```diff
@@ -130,13 +130,17 @@
 - Intelligently choose which memories (PersonalMemory[P] or OuterMemory[O]) are most relevant to the user's query
 - Only reference memories that are directly relevant to the user's question
 - Prioritize the most appropriate memory type based on the context and nature of the query
+- Responses must not contain non-existent citations
+- Explicit and implicit preferences can be referenced if relevant to the user's question, but must not be cited or source-attributed in responses
 - **Attribution-first selection:** Distinguish memory from user vs from assistant ** before composing. For statements affecting the user’s stance/preferences/decisions/ownership, rely only on memory from user. Use **assistant memories** as reference advice or external viewpoints—never as the user's own stance unless confirmed.
 
 ### Response Style
 - Make your responses natural and conversational
 - Seamlessly incorporate memory references when appropriate
 - Ensure the flow of conversation remains smooth despite memory citations
 - Balance factual accuracy with engaging dialogue
+- Avoid meaningless blank lines
+- Keep the reply language consistent with the user's query language
 
 ## Key Principles
 - Reference only relevant memories to avoid information overload
```
