Question related to agent with memory. #3499
Hi all, I've recently been trying to build an agent with memory for a side project, and there is something I'm confused about...
To my understanding, state["messages"] should contain all messages, including the current one and all past messages, when MemorySaver is set as the graph's checkpointer. Based on the code above, it looks like the LLM is re-invoked with all of those messages, even ones that were already handled in previous turns. Is that correct? That doesn't seem to make sense to me, since it reprocesses messages that have already been handled. What if I want the agent to retain the context without re-sending previous messages? Also, if I modify the node to invoke only the current message, I notice that the context disappears.
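The original snippet isn't reproduced in the thread, but here is a minimal sketch of the kind of setup being described, assuming a MessagesState graph with a single chat node and a MemorySaver checkpointer (the model choice, node name, and thread_id are illustrative, not from the original post):

```python
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, MessagesState, START, END

llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice

def chat_node(state: MessagesState):
    # The node sends the *entire* accumulated history to the model,
    # not just the newest user message.
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

builder = StateGraph(MessagesState)
builder.add_node("chat", chat_node)
builder.add_edge(START, "chat")
builder.add_edge("chat", END)

# MemorySaver persists state["messages"] across invocations that share a
# thread_id, so the second call below already sees the first exchange.
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "demo"}}
graph.invoke({"messages": [HumanMessage("What's the weather like today?")]}, config)
graph.invoke({"messages": [HumanMessage("oy that's cold")]}, config)
```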
Replies: 1 comment
LLMs are stateless. Oversimplified: each invocation should include the accumulated chat history. If you didn't include it, the model couldn't do simple things like coreference resolution:
Invocation 1:
User: "What's the weather like today"?
Assistant: "It's rainy and 25 degrees"
a) With history:
Invocation 2:
User: "What's the weather like today"?
Assistant: "It's rainy and 25 degrees"
User: "oy that's cold"
Assistant: "Yes grab a jacket!"
b) Without:
User: "oy that's cold"
Assistant: "What's cold?
If you look at the LLM runs in LangSmith for both cases, it's pretty obvious.
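To make the two cases concrete, here is a small sketch of the node in each variant, assuming the same llm and MessagesState setup as in the sketch above (function names are illustrative): in a) the node forwards the whole of state["messages"], in b) it forwards only the latest message, which is exactly why the context disappears.

```python
def chat_with_history(state: MessagesState):
    # a) The model receives the full conversation, so "oy that's cold"
    #    can be resolved against the earlier weather answer.
    return {"messages": [llm.invoke(state["messages"])]}

def chat_without_history(state: MessagesState):
    # b) The model receives only the newest message and has no way to know
    #    what "cold" refers to, so the earlier context is lost.
    return {"messages": [llm.invoke([state["messages"][-1]])]}
```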