Added a react agent with persistent memory #17
Conversation
dlqqq left a comment
@3coins Thank you for working on this while I was busy. Left a few minor suggestions on the dependencies that we ought to address before merging & releasing. Everything else is non-blocking.
I ran into the same issue you had encountered in the demo. Namely, when asking Jupyternaut to create a notebook, it just creates the file with no content. We probably need some kind of while loop to keep it going, but we can improve that in a future release.
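The "keep it going" loop could look something like the sketch below: re-invoke the agent until it reports a final answer or a step limit is hit. This is a hypothetical illustration only; `run_agent_step`, `Step`, and the toy logic inside are stand-ins, not the actual agent API.

```python
# Hypothetical sketch of the "while loop to keep it going" idea.
# `Step` and `run_agent_step` are illustrative stand-ins; a real
# implementation would invoke the LangGraph agent and check whether
# it still has pending tool calls.
from dataclasses import dataclass

@dataclass
class Step:
    done: bool    # True once the agent produces a final answer
    output: str

def run_agent_step(history: list[str]) -> Step:
    # Toy stand-in for one agent invocation: the first call "creates"
    # an empty notebook, the follow-up call populates it.
    if any("populate" in m for m in history):
        return Step(done=True, output="notebook populated")
    history.append("populate notebook cells")
    return Step(done=False, output="created empty notebook")

def run_until_done(prompt: str, max_steps: int = 5) -> str:
    # Keep re-invoking the agent until it finishes or we hit a cap,
    # so a multi-step task is not abandoned after the first tool call.
    history = [prompt]
    for _ in range(max_steps):
        step = run_agent_step(history)
        if step.done:
            return step.output
    return "stopped: step limit reached"
```

The `max_steps` cap matters: without it, an agent that never reports completion would loop forever.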
We can merge & release this to include it in the metapackage soon.
```python
{"messages": [{"role": "user", "content": message.body}]},
{"configurable": {"thread_id": self.ychat.get_id()}},
stream_mode="messages",
```
(non-blocking) Since we're only adding to the SQLite checkpointer when this persona is called, does this mean that Jupyternaut will lack context on messages not routed to Jupyternaut?
For example, consider the following chat:
```
User: Hello, what is the Riemann hypothesis?
<SomePersona>: <complete nonsense>
User: @Jupyternaut can you try to answer this?
# does Jupyternaut have context on the 2 preceding messages?
```
This is fine for now, just checking to see if I understand the current behavior.
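The behavior being asked about can be illustrated with a minimal in-memory stand-in for the checkpointer: only turns actually routed to a persona land in its thread, so Jupyternaut's thread starts empty. The dict-based store and function names here are hypothetical, not the real SQLite checkpointer API.

```python
# Minimal illustration (hypothetical names) of thread-scoped memory:
# a checkpointer only records the turns routed to that persona.
checkpoints: dict[str, list[dict]] = {}

def invoke_persona(thread_id: str, message: dict) -> None:
    # Only invocations that reach the persona get checkpointed.
    checkpoints.setdefault(thread_id, []).append(message)

# The first two turns go to another persona's thread, so the
# Jupyternaut thread never sees them.
invoke_persona("some-other-thread",
               {"role": "user", "content": "Hello, what is the Riemann hypothesis?"})
invoke_persona("some-other-thread",
               {"role": "assistant", "content": "<complete nonsense>"})

# When Jupyternaut is finally mentioned, its thread starts empty.
jupyternaut_history = checkpoints.get("jupyternaut-thread", [])
print(len(jupyternaut_history))  # 0: no context on the preceding exchange
```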
Correct. We need a shared memory manager (or a store) in the persona manager or base persona that lets personas write messages to a shared context, along with an API to load that shared context.
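A shared store like the one described above could be sketched as follows. `SharedMemory`, its methods, and the room-keyed layout are all hypothetical names chosen for illustration, not an existing API in this repo.

```python
# Hedged sketch of the shared-context store: every persona writes its
# messages here, and any persona can load recent context before
# responding. All names are hypothetical.
from collections import defaultdict

class SharedMemory:
    def __init__(self) -> None:
        self._by_room: dict[str, list[dict]] = defaultdict(list)

    def write(self, room_id: str, message: dict) -> None:
        # Called by every persona (and the router) for each message.
        self._by_room[room_id].append(message)

    def load(self, room_id: str, limit: int = 20) -> list[dict]:
        # Called by a persona to rebuild context, regardless of which
        # persona originally handled each message.
        return self._by_room[room_id][-limit:]

memory = SharedMemory()
memory.write("room-1", {"role": "user", "content": "Hello, what is the Riemann hypothesis?"})
memory.write("room-1", {"role": "assistant", "content": "<SomePersona reply>"})
print(len(memory.load("room-1")))  # 2 messages of shared context
```

With something like this in the persona manager, Jupyternaut could see the two preceding messages in the example chat even though they were routed to another persona.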
```python
    return nb_toolkit

async def get_agent(self, model_id: str, model_args, system_prompt: str):
    model = ChatLiteLLM(**model_args, model_id=model_id, streaming=True)
```
I think the correct parameter should be model=model_id (model instead of model_id), per the ChatLiteLLM attributes.
When testing this PR, the backend complains about a missing OpenAI API key. Debugging it, it seems the model configured in ChatLiteLLM is always the default one, gpt-3.5-turbo.
I opened #19 to fix it.
Summary
This PR introduces a React agent with persistent memory capabilities and tooling support. The changes transform the simple chat-based persona into an agent that can interact with the Jupyter environment through various tools while maintaining conversation context across sessions.
agent-with-tools-working.mp4
Notes