
Commit fafc747

eval result (#428)
* add breakpoint (resume) support in eval scripts
* feat(locomo): support resuming ingestion from a checkpoint
* format code
* doc: update README eval result
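
The resumable-ingestion change amounts to a plain-text record file: every session that finishes ingestion is appended as a `{conv_idx}_{session_idx}` key, and a rerun skips any key it finds in that file. Below is a minimal standalone sketch of the same pattern, with illustrative names (`load_done`, `run_resumable`, `success_records.txt`) rather than the exact code in the diff that follows:

```python
import os
from collections.abc import Callable, Iterable


def load_done(record_file: str) -> set[str]:
    """Keys of work items completed in earlier runs (empty on a first run)."""
    if not os.path.exists(record_file):
        return set()
    with open(record_file) as f:
        return {line.strip() for line in f if line.strip()}


def run_resumable(
    items: Iterable[tuple[str, object]],
    process: Callable[[object], None],
    record_file: str = "success_records.txt",
) -> None:
    """Process (key, item) pairs, skipping keys already recorded and logging new ones."""
    done = load_done(record_file)
    with open(record_file, "a+") as f:
        for key, item in items:
            if key in done:
                print(f"{key} already processed, skipping")
                continue
            process(item)
            f.write(f"{key}\n")  # record the completed key
            f.flush()  # flush right away so an interrupted run can resume from here


if __name__ == "__main__":
    # Toy usage: a second run of this script would skip both items.
    run_resumable([("0_0", "session a"), ("0_1", "session b")], process=print)
```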
1 parent 88699f9 commit fafc747

File tree

2 files changed: +38 -25 lines

README.md

Lines changed: 14 additions & 16 deletions
@@ -54,22 +54,20 @@
 
 ## 📈 Performance Benchmark
 
-MemOS demonstrates significant improvements over baseline memory solutions in multiple reasoning tasks.
-
-| Model | Avg. Score | Multi-Hop | Open Domain | Single-Hop | Temporal Reasoning |
-|-------------|------------|-----------|-------------|------------|---------------------|
-| **OpenAI** | 0.5275 | 0.6028 | 0.3299 | 0.6183 | 0.2825 |
-| **MemOS** | **0.7331** | **0.6430** | **0.5521** | **0.7844** | **0.7321** |
-| **Improvement** | **+38.98%** | **+6.67%** | **+67.35%** | **+26.86%** | **+159.15%** |
-
-> 💡 **Temporal reasoning accuracy improved by 159% compared to the OpenAI baseline.**
-
-### Details of End-to-End Evaluation on LOCOMO
-
-> [!NOTE]
-> Comparison of LLM Judge Scores across five major tasks in the LOCOMO benchmark. Each bar shows the mean evaluation score judged by LLMs for a given method-task pair, with standard deviation as error bars. MemOS-0630 consistently outperforms baseline methods (LangMem, Zep, OpenAI, Mem0) across all task types, especially in multi-hop and temporal reasoning scenarios.
-
-<img src="https://statics.memtensor.com.cn/memos/score_all_end2end.jpg" alt="END2END SCORE">
+MemOS demonstrates significant improvements over baseline memory solutions in multiple memory tasks,
+showcasing its capabilities in **information extraction**, **temporal and cross-session reasoning**, and **personalized preference responses**.
+
+| Model | LOCOMO | LongMemEval | PrefEval-10 | PersonaMem |
+|-----------------|-------------|-------------|-------------|-------------|
+| **GPT-4o-mini** | 52.75 | 55.4 | 2.8 | 43.46 |
+| **MemOS** | **75.80** | **77.80** | **71.90** | **61.17** |
+| **Improvement** | **+43.70%** | **+40.43%** | **+2568%** | **+40.75%** |
+
+### Detailed Evaluation Results
+- We use gpt-4o-mini as the processing and judging LLM and bge-m3 as embedding model in MemOS evaluation.
+- The evaluation was conducted under conditions that align various settings as closely as possible. Reproduce the results with our scripts at [`evaluation`](./evaluation).
+- Check the full search and response details at huggingface https://huggingface.co/datasets/MemTensor/MemOS_eval_result.
+> 💡 **MemOS outperforms all other methods (Mem0, Zep, Memobase, SuperMemory et al.) across all benchmarks!**
 
 ## ✨ Key Features
 

evaluation/scripts/locomo/locomo_ingestion.py

Lines changed: 24 additions & 9 deletions
@@ -88,7 +88,7 @@ def ingest_session(client, session, frame, version, metadata):
     return elapsed_time
 
 
-def process_user(conv_idx, frame, locomo_df, version):
+def process_user(conv_idx, frame, locomo_df, version, success_records, f):
     conversation = locomo_df["conversation"].iloc[conv_idx]
     max_session_count = 35
     start_time = time.time()
@@ -149,11 +149,15 @@ def process_user(conv_idx, frame, locomo_df, version):
 
     print(f"Processing {valid_sessions} sessions for user {conv_idx}")
 
-    for session, metadata in sessions_to_process:
-        session_time = ingest_session(client, session, frame, version, metadata)
-        total_session_time += session_time
-        print(f"User {conv_idx}, {metadata['session_key']} processed in {session_time} seconds")
-
+    for session_idx, (session, metadata) in enumerate(sessions_to_process):
+        if f"{conv_idx}_{session_idx}" not in success_records:
+            session_time = ingest_session(client, session, frame, version, metadata)
+            total_session_time += session_time
+            print(f"User {conv_idx}, {metadata['session_key']} processed in {session_time} seconds")
+            f.write(f"{conv_idx}_{session_idx}\n")
+            f.flush()
+        else:
+            print(f"Session {conv_idx}_{session_idx} already ingested")
     end_time = time.time()
     elapsed_time = round(end_time - start_time, 2)
     print(f"User {conv_idx} processed successfully in {elapsed_time} seconds")
@@ -170,9 +174,20 @@ def main(frame, version="default", num_workers=4):
     print(
         f"Starting processing for {num_users} users in serial mode, each user using {num_workers} workers for sessions..."
     )
-    with concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor:
+    os.makedirs(f"results/locomo/{frame}-{version}/", exist_ok=True)
+    success_records = []
+    record_file = f"results/locomo/{frame}-{version}/success_records.txt"
+    if os.path.exists(record_file):
+        with open(record_file) as f:
+            for i in f.readlines():
+                success_records.append(i.strip())
+
+    with (
+        concurrent.futures.ThreadPoolExecutor(max_workers=num_workers) as executor,
+        open(record_file, "a+") as f,
+    ):
         futures = [
-            executor.submit(process_user, user_id, frame, locomo_df, version)
+            executor.submit(process_user, user_id, frame, locomo_df, version, success_records, f)
             for user_id in range(num_users)
         ]
         for future in concurrent.futures.as_completed(futures):
@@ -216,7 +231,7 @@ def main(frame, version="default", num_workers=4):
         help="Version identifier for saving results (e.g., 1010)",
     )
     parser.add_argument(
-        "--workers", type=int, default=3, help="Number of parallel workers to process users"
+        "--workers", type=int, default=10, help="Number of parallel workers to process users"
    )
    args = parser.parse_args()
    lib = args.lib
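
As the hunks above show, the record file lives at `results/locomo/{frame}-{version}/success_records.txt` and holds one `{conv_idx}_{session_idx}` key per completed session. `main` reads it once before the thread pool starts and passes the resulting list to every `process_user` call, while each worker appends and flushes its key as soon as a session is ingested, so rerunning the same command resumes an interrupted ingestion without re-ingesting sessions that already succeeded.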
