Shared vs Distributed Memory #1877
Unanswered
AdrianaDuarteCorreia asked this question in Q&A
Replies: 0 comments
I've been struggling with how to set up a memory structure for a hierarchical agent system.
In the official cookbook example, there is one memory state per team, such that several messages from different roles (system, assistant, user) succeed each other, in what I would describe as a shared memory state. When messages are passed between teams, or up to the manager, only one final message is sent from each team, so the manager upholds what could be called a distributed memory state.
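To make the two patterns concrete, here is a minimal framework-agnostic sketch of what I mean; the names (`team_memory`, `team_summary`) are my own illustrations, not LangGraph APIs:

```python
# Shared memory: one message list per team, with roles interleaved.
# Every agent on the team sees this entire history.
team_memory = [
    {"role": "system", "content": "You are the research team."},
    {"role": "assistant", "content": "Researcher: found three sources."},
    {"role": "assistant", "content": "Critic: source 2 looks weak."},
]

def team_summary(memory):
    """Collapse a team's shared history into the single final message
    that crosses the team boundary to the manager. Taking the last
    message is just one possible policy; a summarizer model is another."""
    return {"role": "assistant", "content": memory[-1]["content"]}

# Distributed memory at the manager level: the manager holds only one
# message per team, never the teams' internal chatter.
manager_memory = [team_summary(team_memory)]
```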
What I have found, though, is that when implementing these systems in production, especially with open models, shared memory states are highly unreliable, since 1) not all models support multiple system messages and 2) the predictability of each agent's system prompt decreases when the full prompt also includes messages from other agents. Here I'm thinking of the mid-sized Llama 3 models, for instance.
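The workaround I keep reaching for is to rebuild each agent's prompt from scratch on every hop: exactly one system message plus a compact handoff, so the two failure modes above never arise. A rough sketch (the helper name and prompts are hypothetical, not from any library):

```python
def build_agent_prompt(system_prompt, handoff=None):
    """Construct an agent's full prompt in isolation: a single system
    message plus an optional user-role handoff from the previous agent.
    No interleaved history from other agents leaks in, so models that
    reject or mishandle multiple system messages stay predictable."""
    messages = [{"role": "system", "content": system_prompt}]
    if handoff:
        messages.append({"role": "user", "content": handoff})
    return messages

# Usage: the critic never sees the researcher's system prompt or
# intermediate messages, only the researcher's final output.
researcher_msgs = build_agent_prompt("You are the researcher.",
                                     "Summarize topic X.")
critic_msgs = build_agent_prompt("You are the critic.",
                                 "Review this summary: <researcher output>")
```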
My impression is that these shared memory states are optimized for larger (closed) models, but that they don't scale as well for other models.
How is the community thinking about this? Is there a native way to operate distributed-memory teams in LangGraph? And what would be the ideal message to pass between agents?