 SIMPLE_STRUCT_MEM_READER_PROMPT = """You are a memory extraction expert.
-Always respond in the same language as the conversation. If the conversation is in Chinese, respond in Chinese.
-
-Your task is to extract memories from the perspective of ${user_a}, based on a conversation between ${user_a} and ${user_b}. This means identifying what ${user_a} would plausibly remember — including their own experiences, thoughts, plans, or relevant statements and actions made by others (such as ${user_b}) that impacted or were acknowledged by ${user_a}.
-
+Your task is to extract memories from the perspective of user, based on a conversation between user and assistant. This means identifying what user would plausibly remember — including their own experiences, thoughts, plans, or relevant statements and actions made by others (such as assistant) that impacted or were acknowledged by user.
 Please perform:
 1. Identify information that reflects user's experiences, beliefs, concerns, decisions, plans, or reactions — including meaningful input from assistant that user acknowledged or responded to.
 2. Resolve all time, person, and event references clearly:
 {
     "key": <string, a unique, concise memory title>,
     "memory_type": <string, Either "LongTermMemory" or "UserMemory">,
-    "value": <A detailed, self-contained, and unambiguous memory statement
-    — written in English if the input conversation is in English,
-    or in Chinese if the conversation is in Chinese, or any language which
-    align with the conversation language>,
+    "value": <A detailed, self-contained, and unambiguous memory statement — written in English if the input conversation is in English, or in Chinese if the conversation is in Chinese>,
     "tags": <A list of relevant thematic keywords (e.g., ["deadline", "team", "planning"])>
 },
 ...
 ],
-  "summary": <a natural paragraph summarizing the above memories from user's
-  perspective, 120–200 words, **same language** as the input>
+  "summary": <a natural paragraph summarizing the above memories from user's perspective, 120–200 words, same language as the input>
 }

 Language rules:
-- The `key`, `value`, `tags`, `summary` fields must match the language of the input conversation.
+- The `key`, `value`, `tags`, `summary` fields must match the predominant language of the input conversation. **如果输入是中文,请输出中文**
 - Keep `memory_type` in English.

 Example:

 Your Output:"""

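Downstream code has to parse the JSON object this prompt requests. A minimal validation sketch, assuming Python and `json.loads`; note the top-level list key is collapsed out of this diff, so `"memories"` below is a hypothetical name, and only `"summary"` plus the per-item fields actually appear in the hunk above:

```python
import json

# Hypothetical sketch, not the project's real parser: check a model reply
# against the item fields shown in the prompt's JSON schema.
REQUIRED_ITEM_FIELDS = {"key", "memory_type", "value", "tags"}
ALLOWED_MEMORY_TYPES = {"LongTermMemory", "UserMemory"}

def validate_mem_reader_reply(reply: str) -> dict:
    """Parse the reply and verify the fields the prompt asks for."""
    data = json.loads(reply)
    for item in data.get("memories", []):  # "memories" is an assumed key name
        missing = REQUIRED_ITEM_FIELDS - item.keys()
        if missing:
            raise ValueError(f"memory item missing fields: {missing}")
        if item["memory_type"] not in ALLOWED_MEMORY_TYPES:
            raise ValueError(f"unexpected memory_type: {item['memory_type']}")
    if "summary" not in data:
        raise ValueError("reply is missing the summary field")
    return data

# Example reply shaped like the schema above.
reply = json.dumps({
    "memories": [{
        "key": "Trip planning",
        "memory_type": "UserMemory",
        "value": "On 2024-05-01, the user planned a trip to Kyoto.",
        "tags": ["travel", "planning"],
    }],
    "summary": "The user is planning a Kyoto trip.",
})
parsed = validate_mem_reader_reply(reply)
```

A malformed reply (missing `tags`, or a `memory_type` outside the two allowed values) raises `ValueError` instead of silently passing through.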
-SIMPLE_STRUCT_DOC_READER_PROMPT = """
-**ABSOLUTE, NON-NEGOTIABLE, CRITICAL RULE: The language of your entire JSON output's string values (specifically `summary` and `tags`) MUST be identical to the language of the input `[DOCUMENT_CHUNK]`. There are absolutely no exceptions. Do not translate. If the input is Chinese, the output must be Chinese. If English, the output must be English. Any deviation from this rule constitutes a failure to follow instructions.**
-
-You are an expert text analyst for a search and retrieval system. Your task is to process a document chunk and generate a single, structured JSON object.
-Written in English if the input conversation is in English, or in Chinese if
-the conversation is in Chinese, or any language which align with the
-conversation language. 如果输入语言是中文,请务必输出中文。
-
-The input is a single piece of text: `[DOCUMENT_CHUNK]`.
-You must generate a single JSON object with two top-level keys: `summary` and `tags`.
-Written in English if the input conversation is in English, or in Chinese if
-the conversation is in Chinese, or any language which align with the conversation language.
-
-1. `summary`:
-   - A dense, searchable summary of the ENTIRE `[DOCUMENT_CHUNK]`.
-   - The purpose is for semantic search embedding.
-   - A clear and accurate sentence that comprehensively summarizes the main points, arguments, and information within the `[DOCUMENT_CHUNK]`.
-   - The goal is to create a standalone overview that allows a reader to fully understand the essence of the chunk without reading the original text.
-   - The summary should be **no more than 50 words**.
-2. `tags`:
-   - A concise list of **3 to 5 high-level, summative tags**.
-   - **Each tag itself should be a short phrase, ideally 2 to 4 words long.**
-   - These tags must represent the core abstract themes of the text, suitable for broad categorization.
-   - **Crucially, prioritize abstract concepts** over specific entities or phrases mentioned in the text. For example, prefer "Supply Chain Resilience" over "Reshoring Strategies".
-
-Here is the document chunk to process:
-`[DOCUMENT_CHUNK]`
+SIMPLE_STRUCT_DOC_READER_PROMPT = """You are an expert text analyst for a search and retrieval system.
+Your task is to process a document chunk and generate a single, structured JSON object.
+
+Please perform:
+1. Identify key information that reflects factual content, insights, decisions, or implications from the document — including any notable themes, conclusions, or data points. Allow a reader to fully understand the essence of the chunk without reading the original text.
+2. Resolve all time, person, location, and event references clearly:
+   - Convert relative time expressions (e.g., "last year," "next quarter") into absolute dates if context allows.
+   - Clearly distinguish between event time and document time.
+   - If uncertainty exists, state it explicitly (e.g., "around 2024," "exact date unclear").
+   - Include specific locations if mentioned.
+   - Resolve all pronouns, aliases, and ambiguous references into full names or identities.
+   - Disambiguate entities with the same name if applicable.
+3. Always write from a third-person perspective, referring to the subject or content clearly rather than using the first person ("I", "me", "my").
+4. Do not omit any information that is likely to be important or memorable from the document chunk.
+   - Include all key facts, insights, emotional tones, and plans — even if they seem minor.
+   - Prioritize completeness and fidelity over conciseness.
+   - Do not generalize or skip details that could be contextually meaningful.
+
+Return a single valid JSON object with the following structure:
+{
+    "key": <string, a concise title of the `value` field>,
+    "memory_type": "LongTermMemory",
+    "value": <A clear and accurate paragraph that comprehensively summarizes the main points, arguments, and information within the document chunk — written in English if the document chunk is in English, or in Chinese if it is in Chinese>,
+    "tags": <A list of relevant thematic keywords (e.g., ["deadline", "team", "planning"])>
+}
+
+Language rules:
+- The `key`, `value`, and `tags` fields must match the predominant language of the input document chunk. **如果输入是中文,请输出中文**
+- Keep `memory_type` in English.
+
+Document chunk:
 {chunk_text}

-Produce ONLY the JSON object as your response.
-"""
+Your Output:"""

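One practical note on the new template: it embeds both a `{chunk_text}` placeholder and literal JSON braces, so Python's `str.format` would reject the bare `{`. A sketch of a safer substitution; the abridged template and function name here are illustrative, not the project's actual code:

```python
# Sketch only: an abridged stand-in for SIMPLE_STRUCT_DOC_READER_PROMPT.
# Because the prompt mixes literal JSON braces with a {chunk_text}
# placeholder, str.format() would fail on the unescaped "{", so a
# targeted str.replace is used instead.
DOC_READER_TEMPLATE = """Return a single valid JSON object with the following structure:
{
    "key": <string, a concise title>,
    "memory_type": "LongTermMemory"
}

Document chunk:
{chunk_text}

Your Output:"""

def render_doc_reader_prompt(chunk_text: str) -> str:
    # str.replace leaves the literal JSON braces untouched.
    return DOC_READER_TEMPLATE.replace("{chunk_text}", chunk_text)

prompt = render_doc_reader_prompt("Q3 revenue rose 12% year over year.")
```

With `str.format`, escaping every structural brace as `{{`/`}}` would also work, but the replace keeps the template readable as-is.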
 SIMPLE_STRUCT_MEM_READER_EXAMPLE = """Example:
 Conversation: