* add preference text memory
* finish milvus support
* add new builder
* finish preference textual memory base level
* modify code structure
* modify pref module
* implement remaining preference functions
* modify preference.py
* modify bug in milvus
* finish debug
* modify user pref user id code
* modify bug in milvus
* finish debug in core
* repair bug in milvus get_all
* add pref mem search time in core
* modify search for pref mem in product.py
* add simple pref memos example
* modify bug in examples/mem_os/simple_prefs_memos_product.py
* repair bug in user id related part
* modify search
* repair bug in slow update
* fix definition error in extractor -> extract_implicit_preferences
* repair definition error in extractor and modify split function in splitter
* modify code
* modify adder
* optimize the code
* repair bug in adder and extractor
* finish make test and make pre-commit
* repair bug in preference
* add memory field for milvusvecdbitem and modify related module
* pref code clean
* modify prompt of extractor
* modify extractor
* add reranker to pref mem
* remove assembler in pref mem
* modify code
* add op trace based update method in add
* modify slow update in adder
* modify implicit part code in extractor and add duplicate in utils
* modify deduplicate threshold
* modify api config
* repair search-related bug in adder
* repair duplicate-search bug in core
* add pref to new naive cube and server api
* add async pref add by mem_schedular
* modify
* replace print with logger
* repair bug from make pre-commit
* inst cplt
* align to liji cloud server
* repair pkg problem
* modify example of pref
* pre_commit
* fix api bug
* merge inst_cplt to dev
* fix pre commit
* fix pre commit
* fix pre commit error
* modify code following reviewer feedback
* fix bug in make pre_commit
* repair bug in server router
* fix pre commit bug
---------
Co-authored-by: yuan.wang <[email protected]>
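Several commits above mention adding a duplicate check in utils and tuning a deduplicate threshold for preference memories. The repository's actual implementation is not shown here; the following is a minimal sketch, assuming deduplication works by cosine similarity between embedding vectors (all names, signatures, and the 0.9 default are hypothetical illustrations, not the project's real API):

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def deduplicate(items: list[tuple[str, list[float]]], threshold: float = 0.9) -> list[str]:
    """Drop preference entries whose embedding is near-identical to one
    already kept. `threshold` plays the role of the tuned deduplicate
    threshold mentioned in the commit log (value here is illustrative)."""
    kept: list[tuple[str, list[float]]] = []
    for text, vec in items:
        # Keep the entry only if it is not too similar to anything kept so far.
        if all(cosine(vec, kept_vec) < threshold for _, kept_vec in kept):
            kept.append((text, vec))
    return [text for text, _ in kept]
```

With toy 2-d "embeddings", `deduplicate([("likes tea", [1.0, 0.0]), ("likes tea", [0.99, 0.01]), ("hates noise", [0.0, 1.0])])` keeps one copy of the tea preference and the noise preference.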
prompt = f"""You will analyze a conversation between a user and an assistant, focusing on whether the assistant's response violates the user's stated preference.

) -> dict[str, str]:
    prompt = f"""You will analyze a conversation between a user and an assistant, focusing on whether the assistant acknowledges any user preference in answering the user's query.

Evaluate the response based on these stringent criteria:

1. Check if the response explicitly or implicitly mentions or references a user preference.
2. The content of the preference is irrelevant for this check; only its presence matters.

Answer "Yes" if:

1. The response explicitly mentions or refers to a user preference in answering the user's question. Examples include: "Based on your previous preference for xxxx"; "Based on your previous preference, I would suggest you xxxx"; "Since you mentioned you prefer/dislike xxxx"; "Since you are a xxxx"; "I will recommend the following given your preference for xxx", etc. You should extract the "xxxx" in your answer. If it only says "Based on our previous conversation, I recommend..." and does not explicitly mention any preference, you should answer 'No'.
2. The response assumes the user preference in answering the user's question implicitly. For example, when the user asks 'Can you recommend me cars to drive?', if the response is 'Based on your preference, I will recommend non-EV cars, ...', then this indicates the assistant assumes that the user's preference is a dislike of EV cars, and you should answer 'Yes'.

Answer "No" if the response does not mention or refer to any user preference explicitly or implicitly. If you cannot extract the sentence stating what the preference is, answer 'No'.

return {"explanation": "No restatement provided by assistant", "answer": "No"}

prompt = f"""You will analyze a conversation between a user and an assistant, focusing on whether the assistant's restatement of the user's stated preference is the same preference. Evaluate the response based on these stringent criteria to answer if the assistant has hallucinated the preference or not:

2. The assistant's restatement is a minor paraphrase that fully preserves the meaning and intent of the original preference.
3. The restatement is just empty; no hallucination.

Here is the information:
Original user preference: "{preference}"
Assistant's restatement: "{restatement}"

Examine the original preference and the assistant's restatement meticulously and answer. Please answer in this exact XML format without any other additional text:

<explanation>[1 short sentence explanation]</explanation>

prompt = f"""You will analyze a conversation between a user and an assistant, focusing on whether the assistant provides any substantive response to the user's query.

Evaluate the response based on these stringent criteria: