Clarification needed: Multi-user isolation, storage granularity, and handling hallucination loops #183
talhaanwarch started this conversation in General
Yes, by default, when you enable Memori, it automatically processes and stores the entire conversation history to build a comprehensive, evolving context for the agent. For multi-user isolation, create a separate `Memori` instance per user, keyed by a unique session ID:

```python
# User A's session
agent_a = Memori(session_id="user_a_unique_id")
# All interactions for User A use agent_a

# User B's session
agent_b = Memori(session_id="user_b_unique_id")
# All interactions for User B use agent_b
```
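To make the isolation guarantee concrete, here is a toy sketch of the session-keyed pattern the answer above describes. This is NOT Memori's implementation, just an illustration of the concept: each session ID maps to its own store, so retrievals for one user can never surface another user's context. The `SessionMemory` class and its methods are hypothetical names invented for this sketch.

```python
# Toy sketch of session-scoped memory isolation (NOT Memori's actual
# implementation). Each session_id maps to an independent store, so
# User A's context is never visible to User B's retrievals.
class SessionMemory:
    def __init__(self):
        self._stores = {}  # session_id -> list of stored facts

    def store(self, session_id, fact):
        # Facts are written only into this session's own store.
        self._stores.setdefault(session_id, []).append(fact)

    def retrieve(self, session_id):
        # Only this session's facts are returned; unknown sessions
        # see an empty context rather than someone else's data.
        return list(self._stores.get(session_id, []))


mem = SessionMemory()
mem.store("user_a_unique_id", "A prefers dark mode")
mem.store("user_b_unique_id", "B prefers light mode")
```

In a real backend you would resolve the session ID from the authenticated request (cookie, JWT claim, etc.) before touching memory, so cross-tenant access is impossible by construction rather than by convention.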
I am currently evaluating Memori for a potential application and have three specific questions regarding architecture and data integrity before I proceed:
**1. Scope of Memory Storage**
Does `memori.enable()` automatically store the entire conversation history by default? Is there a way to configure it to store only specific parts of the conversation, or is it an "all-or-nothing" storage engine?
**2. Multi-User / Multi-Tenancy Support**
If I have multiple distinct users interacting with the same application backend:
- How does Memori handle data isolation?
- Is there a built-in mechanism (such as session IDs or namespaces) to ensure User A's conversation context is never shared with or accessible to User B?
**3. Hallucination Loops / Memory Poisoning**
- Scenario: the LLM generates a hallucinated fact, and Memori extracts and stores it as an entity or context.
- Risk: in future queries, will Memori retrieve this false fact and inject it as context, effectively reinforcing the hallucination?
- Question: does Memori have any existing validation steps or "fact-checking" layers before committing data to long-term memory, to prevent this kind of memory poisoning?
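Whether or not Memori ships such a layer, the question can be made concrete with a sketch of a pre-commit validation gate. Everything here is hypothetical and not part of Memori's API (`commit_fact`, `simple_verifier` are invented names): the idea is simply that a fact reaches long-term memory only if a verifier callback approves it.

```python
# Hypothetical pre-commit validation gate -- NOT an existing Memori
# feature, just one possible shape for a "fact-checking" layer that
# limits memory poisoning.
def commit_fact(memory_store, fact, verifier):
    """Append `fact` to `memory_store` only if `verifier` accepts it."""
    if verifier(fact):
        memory_store.append(fact)
        return True
    # Rejected facts never enter long-term memory, so they cannot be
    # retrieved and re-injected as context later.
    return False


# Example verifier: reject facts the model itself hedged on. A real
# verifier might instead cross-check against a trusted source or ask
# a second model to grade the claim.
def simple_verifier(fact):
    return "might" not in fact and "possibly" not in fact


memory = []
commit_fact(memory, "The API rate limit is 100 req/min", simple_verifier)
commit_fact(memory, "The service might support SAML", simple_verifier)
```

The design choice worth debating is where the gate sits: validating at write time (as above) keeps the store clean, while validating at retrieval time lets you keep everything but score it before injection.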