
Releases: jupyterlab/jupyter-ai

v3.0.0beta7

10 Sep 18:36

Pre-release

This release notably upgrades to jupyterlab-chat==0.17.0, which is in the process of being published onto Conda Forge. We are targeting this as the first v3 release to be available on Conda Forge! πŸŽ‰

This release also fixes a bug that prevented some users from starting Jupyter AI locally on v3.0.0b6. Thank you @andreyvelich for contributing that fix so quickly! πŸ’ͺ

Finally, we've also added some enhancements & fixes for the magic commands & the model parameters UI. πŸ€—

(Full Changelog)

Enhancements made

Bugs fixed

Maintenance and upkeep improvements

Contributors to this release

(GitHub contributors page for this release)

@andreyvelich | @brichet | @dlqqq | @ellisonbg | @jonahjung22 | @jtpio | @srdas

v3.0.0beta6

22 Aug 18:57

Pre-release

This release includes several major upgrades to Jupyter AI v3, most notably migrating from Langchain to LiteLLM.

  • πŸŽ‰ Jupyter AI now provides >1000 LLMs out-of-the-box, without requiring an optional dependency for most providers. The only optional dependency that you may need is boto3, which is required for Amazon Bedrock models.

  • πŸš€ Jupyter AI is significantly faster to install and start. The Jupyter AI server extension startup time has been reduced from ~10000ms to ~2500ms (-75 pp). The remaining startup latency mostly comes from the time it takes to import jupyter_ai. We plan to improve this further by iterating on #1115.

  • πŸ’ͺ We have completely overhauled the AI settings page & simplified the model configuration process. The new AI settings page allows you to type in any LiteLLM model ID, without being restricted to the suggestions that appear as a popup. This will allow you to use the latest LLMs as soon as they are released, even if they have not yet been added to the model lists in our source code.

    • By v3, users will also be able to define custom model parameters, which are passed directly as keyword arguments to litellm.acompletion(). Users will not have to request maintainers to add fields to models anymore.
  • πŸ”‘ Finally, we've greatly simplified the process of providing your API keys. All API keys can now be defined as environment variables directly passed to jupyter-lab. You may also define API keys locally in the .env file at your workspace root, which is used throughout all of Jupyter AI. You can edit the .env file directly, use the UI we provide in the AI settings page.

There are some minor breaking changes:

  • The path local personas are loaded from has been moved from .jupyter/ to .jupyter/personas.

  • The new "model parameters" section has a couple of bugs that will be fixed in future pre-releases.

  • We have temporary hidden the "inline completion model" section until we refactor the backend to work with LiteLLM. That work is being tracked in #1431. Contributions welcome.

  • We have also hidden the "embedding model" section. We plan for Jupyternaut to automatically gather the context it needs entirely through agentic tool-calling, which may remove the need for a vector store & embedding model. This may change in the future depending on the results on this effort.

(Full Changelog)

Enhancements made

Contributors to this release

(GitHub contributors page for this release)

@andrii-i | @cszhbo | @dlqqq | @jonahjung22 | @srdas

v3.0.0beta5

25 Jul 13:51

Pre-release

(Full Changelog)

Enhancements made

Maintenance and upkeep improvements

Contributors to this release

(GitHub contributors page for this release)

@anthonyhungnguyen | @dlqqq | @joadoumie

v2.31.6

25 Jul 13:24


(Full Changelog)

Enhancements made

Maintenance and upkeep improvements

Documentation improvements

  • Updated documentation for using Ollama with cell magics on non-default port #1370 (@srdas)

Contributors to this release

(GitHub contributors page for this release)

@dlqqq | @ellisonbg | @meeseeksmachine

v3.0.0beta4

10 Jul 22:05

Pre-release

(Full Changelog)

Enhancements made

  • Add /refresh-personas command and default persona configurable #1405 (@dlqqq)

Bugs fixed

  • Bump @jupyter/chat dependency and regenerate yarn.lock, pin cohere to <5.16 #1412 (@andrii-i)
  • Return error message when the completion model is not specified for the Jupyternaut persona #1408 (@srdas)

Contributors to this release

(GitHub contributors page for this release)

@andrii-i | @dlqqq | @ellisonbg | @srdas

v3.0.0beta3

07 Jul 15:32

Pre-release

(Full Changelog)

Enhancements made

Contributors to this release

(GitHub contributors page for this release)

@3coins | @andrii-i | @dlqqq | @ellisonbg

v3.0.0beta2

28 Jun 00:28

Pre-release

(Full Changelog)

Enhancements made

Contributors to this release

(GitHub contributors page for this release)

@andrii-i | @ellisonbg | @fperez

v3.0.0beta1

26 Jun 16:47

Pre-release

(Full Changelog)

Enhancements made

Maintenance and upkeep improvements

Contributors to this release

(GitHub contributors page for this release)

@3coins | @dlqqq | @ellisonbg | @haofan | @pre-commit-ci

v3.0.0b0

10 Jun 22:54

Pre-release

This is the first beta release of Jupyter AI v3! We've completed a majority of the new APIs & integrations that we plan to use in v3.0.0. It's now time for us to build features, fix bugs, (greatly) improve the UI, and make Jupyternaut a powerful default AI agent. We plan to move very quickly in the next couple of weeks to make v3.0.0 available to users as soon as we can. If everything works out, we will release v3.0.0 by the end of June. πŸ’ͺ

This release notably re-implements the "stop streaming" button that existed in Jupyter AI v2 & improves performance by removing thousands of lines of old v2 code. Aside from the slash command capabilities (which will be implemented as agent tools during the beta phase), Jupyter AI v3 now has feature parity with Jupyter AI v2. πŸŽ‰

(Full Changelog)

Enhancements made

Maintenance and upkeep improvements

Documentation improvements

  • Updated documentation for using Ollama with cell magics on non-default port #1370 (@srdas)

Contributors to this release

(GitHub contributors page for this release)

@brichet | @dlqqq | @srdas

v3.0.0a1

04 Jun 21:34

Pre-release

Hey folks! This v3 release notably introduces AI personas that replace chat handlers, fixes various usability issues encountered in v3.0.0a0, and upgrades to LangChain v0.3 & Pydantic v2. πŸŽ‰

AI personas

AI personas redefine how new messages are handled in Jupyter AI, superseding the "chat handler" convention used in v2. AI personas are like chatbots: they are available in every chat instance, and each persona can use any model or framework of its choice.

  • Each chat can have any number of AI personas.
  • You have to @-mention a persona to get it to reply. The available personas will be listed after typing @, which shows a menu listing the available personas.
  • Currently, Jupyter AI only has a single AI persona by default: Jupyternaut.
  • Each message may mention any number of AI personas, so you can send the same question to multiple personas.
  • Personas can have a custom name & avatar.
  • Custom AI personas can be added to your Jupyter AI instance by writing & installing a new package that provides custom AI personas as entry points.
  • We plan to add more AI personas by default and/or provide library packages that add AI personas.
  • More information will be available in the v3 user documentation once it is ready.

There's also a new v3 documentation page! Currently, only the developer documentation has been updated. Please read through the v3 developer docs if you are interested in writing your own AI personas. πŸ€—

Planned future work

  • Jupyternaut in v3 is similar to Jupyternaut in v2, but currently lacks slash commands. We are planning to replace slash commands with agentic tools called by the chat model directly.

    • In other words, Jupyternaut will infer your intent based on your prompt and automatically learn/generate/fix files by v3.0.0.
    • We will develop this once we begin work on providing APIs for agentic tool use and integrating MCP support after v3.0.0b0 (beta development phase).
  • See the roadmap issue & GitHub milestones for more details on our future work: #1052

(Full Changelog)

Enhancements made

  • Introduce AI persona framework #1341 (@dlqqq)
  • Separate BaseProvider for faster import #1338 (@krassowski)
  • Added new gpt-4.1 models #1325 (@srdas)
  • Introduce AI persona framework #1324 (@dlqqq)
  • [v3] Upgrade to jupyterlab-chat v0.8, restore context command completions #1290 (@dlqqq)
  • Added help text fields for embedding providers in the AI Setting page #1288 (@srdas)
  • Allow chat handlers to be initialized in any order #1268 (@Darshan808)
  • Allow embedding model fields, fix coupled model fields, add custom OpenAI provider #1264 (@srdas)
  • Refactor Chat Handlers to Simplify Initialization #1257 (@Darshan808)
  • Make Native Chat Handlers Overridable via Entry Points #1249 (@Darshan808)
  • Upgrade to LangChain v0.3 and Pydantic v2 #1201 (@dlqqq)
  • Show error icon near cursor on inline completion errors #1197 (@Darshan808)

Bugs fixed

  • Fix the path missing in inline completion request when there is no kernel #1361 (@krassowski)
  • Periodically update the persona awareness to keep it alive #1358 (@brichet)
  • Added a local identity provider. #1333 (@3coins)
  • Handle missing field in config.json on version upgrade #1330 (@srdas)
  • [3.x] Expand edge case handling in ConfigManager #1322 (@dlqqq)
  • Open the AI settings in a side panel in Notebook application #1309 (@brichet)
  • Add default_completions_model trait #1303 (@srdas)
  • Pass model_parameters trait to embedding & completion models #1298 (@srdas)
  • Migrate old config schemas, fix v2.31.0 regression #1294 (@dlqqq)
  • Remove error log emitted when FAISS file is absent #1287 (@srdas)
  • Ensure magics package version is consistent in future releases #1280 (@dlqqq)
  • Correct minimum versions in dependency version ranges #1272 (@dlqqq)
  • Allow embedding model fields, fix coupled model fields, add custom OpenAI provider #1264 (@srdas)
  • Enforce path imports for MUI icons, upgrade to ESLint v8 #1225 (@krassowski)
  • Fixes duplicate api key being passed inΒ openrouter.py #1216 (@srdas)
  • Fix MUI theme in Jupyter AI Settings #1210 (@MUFFANUJ)
  • Fix Amazon Nova support (use StrOutputParser) #1202 (@dlqqq)
  • Remove remaining shortcut to focus the chat input #1186 (@brichet)
  • Fix specifying empty list in provider and model allow/denylists #1185 (@MaicoTimmerman)
  • Reply gracefully when chat model is not selected #1183 (@dlqqq)

Maintenance and upkeep improvements

Documentation improvements
