Releases: jupyterlab/jupyter-ai
v3.0.0beta7
This release notably upgrades to `jupyterlab-chat==0.17.0`, which is in the process of being published on Conda Forge. This release is targeted to be the first v3 release published on Conda Forge!
This release also fixes a bug that prevented some users from starting Jupyter AI locally on v3.0.0b6. Thank you @andreyvelich for contributing that fix so quickly!
Finally, we've also added some enhancements & fixes for the magic commands & the model parameters UI.
Enhancements made
- Upgrade to `jupyterlab-chat` v0.17.0 #1480 (@dlqqq)
- Add `api_base` to common model parameters #1478 (@jonahjung22)
- [magics] Add options to include the API url & key with alias #1477 (@srdas)
- Simplify model parameter REST API #1475 (@jonahjung22)
- Add model parameter type dropdown #1473 (@jonahjung22)
- [magics] Add `--api-base` and `--api-key-name` arguments #1471 (@srdas)
- Show the AI settings in the right area with Jupyter Notebook #1470 (@jtpio)
Bugs fixed
- Fix empty directory for Jupyter AI config #1472 (@andreyvelich)
Maintenance and upkeep improvements
- fixes directory of pr template #1474 (@jonahjung22)
Contributors to this release
(GitHub contributors page for this release)
@andreyvelich | @brichet | @dlqqq | @ellisonbg | @jonahjung22 | @jtpio | @srdas
v3.0.0beta6
This release includes several major upgrades to Jupyter AI v3, most notably migrating from LangChain to LiteLLM.
- Jupyter AI now provides >1000 LLMs out of the box, without requiring an optional dependency for most providers. The only optional dependency that you may need is `boto3`, which is required for Amazon Bedrock models.
- Jupyter AI is significantly faster to install and start. The Jupyter AI server extension startup time has been reduced from ~10000ms to ~2500ms (a 75% reduction). The remaining startup latency mostly comes from the time it takes to import `jupyter_ai`. We plan to improve this further by iterating on #1115.
- We have completely overhauled the AI settings page & simplified the model configuration process. The new AI settings page allows you to type in any LiteLLM model ID, without being restricted to the suggestions that appear in a popup. This allows you to use the latest LLMs as soon as they are released, even if they have not yet been added to the model lists in our source code.
  - By v3, users will also be able to define custom model parameters, which are passed directly as keyword arguments to `litellm.acompletion()`. Users will no longer have to ask maintainers to add fields to models.
- Finally, we've greatly simplified the process of providing your API keys. All API keys can now be defined as environment variables passed directly to `jupyter-lab`. You may also define API keys in the `.env` file at your workspace root, which is used throughout Jupyter AI. You can edit the `.env` file directly, or use the UI we provide in the AI settings page.
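For illustration, a `.env` file at the workspace root might look like the following. The variable names below are common provider examples, not an exhaustive or prescribed list, and the values shown are placeholders:

```
# .env — placed at the workspace root
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```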
There are some minor breaking changes:

- The path local personas are loaded from has been moved from `.jupyter/` to `.jupyter/personas`.
- The new "model parameters" section has a couple of bugs that will be fixed in future pre-releases.
- We have temporarily hidden the "inline completion model" section until we refactor the backend to work with LiteLLM. That work is being tracked in #1431. Contributions welcome.
- We have also hidden the "embedding model" section. We plan for Jupyternaut to automatically gather the context it needs entirely through agentic tool-calling, which may remove the need for a vector store & embedding model. This may change in the future depending on the results of this effort.
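The custom model parameters mentioned earlier are forwarded to LiteLLM as keyword arguments. A minimal sketch of that pass-through pattern follows; the helper name and merge logic are illustrative assumptions, not Jupyter AI's actual implementation:

```python
# Sketch of how user-defined model parameters could be merged into a LiteLLM
# completion call. In Jupyter AI v3, custom parameters are passed directly as
# keyword arguments to litellm.acompletion(). (Helper name is hypothetical.)
def build_completion_kwargs(model_id: str, messages: list, custom_params: dict) -> dict:
    kwargs = {"model": model_id, "messages": messages}
    kwargs.update(custom_params)  # user-defined parameters pass through unchanged
    return kwargs

kwargs = build_completion_kwargs(
    "openai/gpt-4.1",
    [{"role": "user", "content": "Hello"}],
    {"temperature": 0.2, "api_base": "https://example.com/v1"},
)
# These kwargs would then be unpacked into litellm.acompletion(**kwargs).
```

Because the parameters are splatted through unchanged, any argument that `litellm.acompletion()` accepts can be set without a code change in Jupyter AI.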
Enhancements made
- PR Template #1446 (@jonahjung22)
- Load local personas from `.jupyter/personas` instead of `.jupyter/` #1443 (@andrii-i)
- Migrate from LangChain to LiteLLM (major upgrade) #1426 (@dlqqq)
Contributors to this release
(GitHub contributors page for this release)
@andrii-i | @cszhbo | @dlqqq | @jonahjung22 | @srdas
v3.0.0beta5
Enhancements made
- Add file attachment directly to JupyternautPersona when file is included in message #1419 (@joadoumie)
- Add VertexAI model provider #1417 (@anthonyhungnguyen)
v2.31.6
Enhancements made
- Add VertexAI model provider #1417 (@anthonyhungnguyen)
- Refresh the list of supported Gemini models. #1381 (@haofan)
v3.0.0beta4
Bugs fixed
- Bump `@jupyter/chat` dependency and regenerate `yarn.lock`, pin `cohere` to `<5.16` #1412 (@andrii-i)
- Return error message when the completion model is not specified for the Jupyternaut persona #1408 (@srdas)
Contributors to this release
(GitHub contributors page for this release)
@andrii-i | @dlqqq | @ellisonbg | @srdas
v3.0.0beta3
Enhancements made
- Bump jupyterlab-chat version to v0.16.0 #1406 (@andrii-i)
- Update user message routing rules #1399 (@3coins)
- Use `uv`, overhaul dev setup, update contributor docs #1392 (@dlqqq)
Contributors to this release
(GitHub contributors page for this release)
@3coins | @andrii-i | @dlqqq | @ellisonbg
v3.0.0beta2
Enhancements made
- Add error handling for persona loading failures #1397 (@ellisonbg)
- Add ignore globs for hidden files in CM config #1396 (@ellisonbg)
- Hide backslashes in `@file` paths with spaces #1390 (@andrii-i)
- Load personas dynamically from `.jupyter` dir #1380 (@fperez)
v3.0.0beta1
Enhancements made
- Upgrade to Jupyter Chat v0.15.0 #1389 (@dlqqq)
- Add MCP config to the .jupyter directory #1385 (@ellisonbg)
- Added toolkit models #1382 (@3coins)
- Refresh the list of supported Gemini models. #1381 (@haofan)
- Allow personas to get chat path and directory #1379 (@dlqqq)
- Add functions for finding the .jupyter directory or the workspace directory #1376 (@ellisonbg)
Contributors to this release
(GitHub contributors page for this release)
@3coins | @dlqqq | @ellisonbg | @haofan | @pre-commit-ci
v3.0.0b0
This is the first beta release of Jupyter AI v3! We've completed a majority of the new APIs & integrations that we plan to use in v3.0.0. It's now time for us to build features, fix bugs, (greatly) improve the UI, and make Jupyternaut a powerful default AI agent. We plan to move very quickly in the next couple of weeks to make v3.0.0 available to users as soon as we can. If everything works out, we will release v3.0.0 by the end of June.
This release notably implements the "stop streaming" button that existed in Jupyter AI v2 & improves performance by removing thousands of lines of old v2 code. Aside from the slash command capabilities (which will be implemented as agent tools during the beta), Jupyter AI v3 now has feature parity with Jupyter AI v2.
Maintenance and upkeep improvements
- Raise `jupyterlab-chat` version ceiling #1373 (@dlqqq)
- Remove unused code from v3 `main` branch #1369 (@dlqqq)
v3.0.0a1
Hey folks! This v3 release notably introduces AI personas that replace chat handlers, fixes various usability issues encountered in v3.0.0a0, and upgrades to LangChain v0.3 & Pydantic v2.
AI personas
AI personas redefine how new messages are handled in Jupyter AI, and supersede the previous convention of "chat handlers" used in v2. AI personas are like "chatbots" available in every chat instance, and each can use any model/framework of its choice.
- Each chat can have any number of AI personas.
- You have to `@`-mention a persona to get it to reply. Typing `@` opens a menu listing the available personas.
- Currently, Jupyter AI only has a single AI persona by default: Jupyternaut.
- Each message may mention any number of AI personas, so you can send the same question to multiple personas.
- Personas can have a custom name & avatar.
- Custom AI personas can be added to your Jupyter AI instance by writing & installing a new package that provides custom AI personas as entry points.
- We plan to add more AI personas by default and/or provide library packages that add AI personas.
- More information will be available in the v3 user documentation once it is ready.
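As a rough sketch of the entry-point mechanism for custom personas, a persona package's packaging metadata could declare its persona classes like this. The entry point group name and class path below are assumptions for illustration; see the v3 developer docs for the actual convention:

```toml
# pyproject.toml of a hypothetical package providing a custom persona.
# The group name "jupyter_ai.personas" is an assumption, not confirmed here.
[project.entry-points."jupyter_ai.personas"]
my-persona = "my_package.personas:MyPersona"
```

Once such a package is installed into the same environment as Jupyter AI, the extension can discover the persona at startup without any further configuration.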
There's also a new v3 documentation page! Currently, only the developer documentation has been updated. Please read through the v3 developer docs if you are interested in writing your own AI personas.
- Link to new v3 developer docs: https://jupyter-ai.readthedocs.io/en/v3/developers/index.html
Planned future work
- Jupyternaut in v3 is similar to Jupyternaut in v2, but currently lacks slash commands. We are planning to replace slash commands with agentic tools called by the chat model directly.
  - In other words, by v3.0.0 Jupyternaut will infer your intent from your prompt and automatically learn/generate/fix files.
  - We will develop this once we begin work on providing APIs for agentic tool use and integrating MCP support after v3.0.0b0 (the beta development phase).
- See the roadmap issue & GitHub milestones for more details on our future work: #1052
Enhancements made
- Introduce AI persona framework #1341 (@dlqqq)
- Separate `BaseProvider` for faster import #1338 (@krassowski)
- Added new `gpt-4.1` models #1325 (@srdas)
- Introduce AI persona framework #1324 (@dlqqq)
- [v3] Upgrade to jupyterlab-chat v0.8, restore context command completions #1290 (@dlqqq)
- Added help text fields for embedding providers in the AI Setting page #1288 (@srdas)
- Allow chat handlers to be initialized in any order #1268 (@Darshan808)
- Allow embedding model fields, fix coupled model fields, add custom OpenAI provider #1264 (@srdas)
- Refactor Chat Handlers to Simplify Initialization #1257 (@Darshan808)
- Make Native Chat Handlers Overridable via Entry Points #1249 (@Darshan808)
- Upgrade to LangChain v0.3 and Pydantic v2 #1201 (@dlqqq)
- Show error icon near cursor on inline completion errors #1197 (@Darshan808)
Bugs fixed
- Fix the path missing in inline completion request when there is no kernel #1361 (@krassowski)
- Periodically update the persona awareness to keep it alive #1358 (@brichet)
- Added a local identity provider. #1333 (@3coins)
- Handle missing field in config.json on version upgrade #1330 (@srdas)
- [3.x] Expand edge case handling in ConfigManager #1322 (@dlqqq)
- Open the AI settings in a side panel in Notebook application #1309 (@brichet)
- Add `default_completions_model` trait #1303 (@srdas)
- Pass `model_parameters` trait to embedding & completion models #1298 (@srdas)
- Migrate old config schemas, fix v2.31.0 regression #1294 (@dlqqq)
- Remove error log emitted when FAISS file is absent #1287 (@srdas)
- Ensure magics package version is consistent in future releases #1280 (@dlqqq)
- Correct minimum versions in dependency version ranges #1272 (@dlqqq)
- Allow embedding model fields, fix coupled model fields, add custom OpenAI provider #1264 (@srdas)
- Enforce path imports for MUI icons, upgrade to ESLint v8 #1225 (@krassowski)
- Fixes duplicate api key being passed in `openrouter.py` #1216 (@srdas)
- Fix MUI theme in Jupyter AI Settings #1210 (@MUFFANUJ)
- Fix Amazon Nova support (use `StrOutputParser`) #1202 (@dlqqq)
- Remove remaining shortcut to focus the chat input #1186 (@brichet)
- Fix specifying empty list in provider and model allow/denylists #1185 (@MaicoTimmerman)
- Reply gracefully when chat model is not selected #1183 (@dlqqq)
Maintenance and upkeep improvements
- Revert "Introduce AI persona framework (#1324)" #1340 (@dlqqq)
- Add `pyupgrade --py39-plus` and `autoflake` to `pre-commit` config #1329 (@rominf)
- Ensure magics package version is consistent in future releases #1280 (@dlqqq)
- Correct minimum versions in dependency version ranges #1272 (@dlqqq)
- Remove the dependency on `jupyterlab` #1234 (@jtpio)
- Upgrade to `actions/cache@v4` #1228 (@dlqqq)
- Typo in comment #1217 (@Carreau)
Documentation improvements
- Overhaul v3 developer documentation #1344 (@dlqqq)
- Update documentation to show usage with OpenRouter API and URL #1318 (@srdas)
- Add information about ollama - document it as an available provider and provide clearer troubleshooting help. #1235 (@fperez)
- Add documentation for vLLM usage #1232 (@srdas)
- Update documentation for setting API keys without revealing them #1224 (@srdas)
- Typo in comment #1217 (@Carreau)
- Docs: Update installation steps to work in bash & zsh #1211 (@srdas)
- Update developer docs on Pydantic compatibility #1204 (@dlqqq)
- Update documentation to add usage of `Openrouter` #1193 (@srdas)
- Fix dev install steps in contributor docs [#1188](https://github.com/jupyterlab/jupyter-a...