Conversation

@3coins (Contributor) commented Oct 29, 2025

Summary

This PR introduces a React agent with persistent memory capabilities and tooling support. The changes transform the simple chat-based persona into an agent that can interact with the Jupyter environment through various tools while maintaining conversation context across sessions.

agent-with-tools-working.mp4

Notes

  • There is further work needed to refine the prompt so that the agent creates correct code content for cells. Because the current prompt is focused on the chat experience, LLM code outputs are wrapped in markdown blocks.
  • Although the included tools mostly work, I have only been able to test them with simple, direct prompts; we will need to tweak the tools to support complex scenarios.
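The markdown-wrapping issue in the first note could be worked around with a small post-processing step. This is a hypothetical helper (not part of this PR, and the function name is an assumption): if the LLM reply contains fenced blocks, keep only their contents; otherwise pass the text through unchanged.

```python
import re

def strip_markdown_fences(text: str) -> str:
    """Extract raw code from an LLM reply that wraps it in ```-fences.

    Hypothetical post-processing sketch: returns the concatenated
    contents of any fenced blocks, or the text unchanged if none exist.
    """
    blocks = re.findall(r"```[a-zA-Z0-9_+-]*\n(.*?)```", text, re.DOTALL)
    return "\n".join(b.rstrip("\n") for b in blocks) if blocks else text

reply = "Here is the cell:\n```python\nprint('hi')\n```\n"
print(strip_markdown_fences(reply))  # -> print('hi')
```

A prompt fix is still the better long-term solution; this only guards against replies that slip through.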

@3coins 3coins added the enhancement New feature or request label Oct 29, 2025
@3coins 3coins marked this pull request as ready for review November 1, 2025 02:53
@3coins 3coins changed the title from "WIP: Added a react agent with persistent memory" to "Added a react agent with persistent memory" Nov 1, 2025
@dlqqq (Contributor) left a comment

@3coins Thank you for working on this while I was busy. Left a few minor suggestions on the dependencies that we ought to address before merging & releasing. Everything else is non-blocking.

I ran into the same issue you had encountered in the demo. Namely, when asking Jupyternaut to create a notebook, it just creates the file with no content. We probably need some kind of while loop to keep it going, but we can improve that in a future release.

We can merge & release this to include it in the metapackage soon.
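The "while loop to keep it going" idea above can be sketched as a retry wrapper around the agent call. Everything here is an assumption for illustration: `invoke` and `is_complete` stand in for the real agent invocation and a real completion check.

```python
# Hypothetical sketch (not in this PR): re-invoke the agent until its
# output passes a completion check, up to a retry cap.

def run_until_complete(invoke, is_complete, prompt, max_turns=3):
    result = None
    for _ in range(max_turns):
        result = invoke(prompt)
        if is_complete(result):
            break
        # Ask the agent to finish what it started.
        prompt = "The notebook is still empty; please add the cell content."
    return result

# Toy stand-ins: the "agent" only fills the notebook on the second turn.
calls = {"n": 0}
def fake_invoke(prompt):
    calls["n"] += 1
    return {"cells": ["print(1)"] if calls["n"] > 1 else []}

out = run_until_complete(fake_invoke, lambda r: bool(r["cells"]), "make a notebook")
print(out)  # -> {'cells': ['print(1)']}
```

The real check would need to inspect the notebook on disk rather than the agent's return value, which is part of why this is deferred to a future release.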

Comment on lines +169 to +171
{"messages": [{"role": "user", "content": message.body}]},
{"configurable": {"thread_id": self.ychat.get_id()}},
stream_mode="messages",
Contributor

(non-blocking) Since we're only adding to the SQLite checkpointer when this persona is called, does this mean that Jupyternaut will lack context on messages not routed to Jupyternaut?

For example, consider the following chat:

User: Hello, what is the Riemann hypothesis?
<SomePersona>: <complete nonsense>
User: @Jupyternaut can you try to answer this?
# does Jupyternaut have context on the 2 preceding messages?

This is fine for now, just checking to see if I understand the current behavior.
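The behavior being asked about can be mimicked with a toy in-memory checkpointer (an illustration only, not LangGraph's SQLite implementation): history is keyed by thread_id, but a message is recorded only when this persona actually handles it, so messages routed to other personas never reach Jupyternaut's memory.

```python
# Toy model: the checkpointer records a message only when this persona
# is invoked for it.
checkpointer = {}  # thread_id -> list of saved messages

def invoke_persona(thread_id, message):
    history = checkpointer.setdefault(thread_id, [])
    history.append(message)  # only invoked messages are saved
    return history

chat_id = "chat-1"
# The Riemann-hypothesis exchange was handled by SomePersona, so
# invoke_persona is never called for those two messages.
history = invoke_persona(chat_id, "@Jupyternaut can you try to answer this?")
print(len(history))  # -> 1: no context on the two preceding messages
```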

@3coins (Contributor, Author) commented Nov 1, 2025

Correct. We need a shared memory manager (or store) in the persona manager or base persona that lets personas write messages to a shared context, along with an API to load that context.
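The shared memory manager described above might look like the following sketch. All names here are assumptions for illustration, not the actual jupyter-ai API: personas write every chat message to a shared per-chat store, and any persona can load that transcript regardless of routing.

```python
from collections import defaultdict

class SharedMemoryManager:
    """Hypothetical shared store living in the persona manager."""

    def __init__(self):
        self._store = defaultdict(list)  # chat_id -> shared transcript

    def write_message(self, chat_id, sender, body):
        self._store[chat_id].append({"sender": sender, "body": body})

    def load_context(self, chat_id):
        return list(self._store[chat_id])

memory = SharedMemoryManager()
memory.write_message("chat-1", "user", "Hello, what is the Riemann hypothesis?")
memory.write_message("chat-1", "SomePersona", "<answer>")
# A persona that was never routed those messages can still load them:
print(len(memory.load_context("chat-1")))  # -> 2
```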

@dlqqq dlqqq merged commit 4923218 into jupyter-ai-contrib:main Nov 2, 2025
3 of 4 checks passed
return nb_toolkit

async def get_agent(self, model_id: str, model_args, system_prompt: str):
model = ChatLiteLLM(**model_args, model_id=model_id, streaming=True)
Contributor

I think the correct parameter should be model=model_id (model instead of model_id), per the ChatLiteLLM attributes.

When testing this PR, the backend complains about a missing OpenAI API key. Debugging suggests that the model set up in ChatLiteLLM is always the default one, gpt-3.5-turbo.
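The symptom is consistent with the unrecognized keyword being silently dropped, leaving the default model in place. A minimal stand-in (not the real ChatLiteLLM class; the model names below are placeholders) illustrates the failure mode and the proposed fix:

```python
# Stand-in for the observed behavior: an unrecognized keyword argument
# is ignored, so the default model stays in place.
class FakeChatLiteLLM:
    def __init__(self, model="gpt-3.5-turbo", **ignored_kwargs):
        self.model = model

wrong = FakeChatLiteLLM(model_id="bedrock/claude-3")  # keyword ignored
fixed = FakeChatLiteLLM(model="bedrock/claude-3")     # the proposed fix
print(wrong.model, fixed.model)  # -> gpt-3.5-turbo bedrock/claude-3
```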

Contributor

I opened #19 to fix it.

3 participants