[WIP] Migrate from LangChain to LiteLLM (major upgrade) #1426
Description
This PR contains all of the code changes required for the complete migration from LangChain to LiteLLM as the model library used by Jupyter AI, as proposed in #1418. We will iterate on this through smaller PRs targeting the `litellm` branch. These "sub-PRs" are listed below. This PR will be merged into the `main` branch once the "issues remaining" (listed below) are addressed and the `litellm` branch is stable for general use. Until then, this PR will be marked as a draft.
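For illustration, here is a minimal sketch of the unified completion call that LiteLLM standardizes on, which replaces per-provider LangChain classes such as `ChatOpenAI`. The model ID and prompt are placeholders, not code from this PR:

```python
import litellm

# One OpenAI-style completion function covers every supported provider,
# so per-provider chat classes are no longer needed.
response = litellm.completion(
    model="openai/gpt-4o",  # placeholder "<provider-name>/<model-name>" ID
    messages=[{"role": "user", "content": "Hello from Jupyter AI!"}],
)
print(response.choices[0].message.content)
```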
Issues closed
- Closes [v3-beta] EPIC: Define new API for models & model providers #1312
- Closes `jupyter_ai` import time is too slow #1115
- Closes Error on chat openai with litellm api for self hosted llm #1308
- Closes Add Deepseek support #1434
Issues remaining
- Check the `jupyter_ai` import time to confirm that `jupyter_ai` import time is too slow #1115 is resolved.

Sub-PRs merged
Summary of changes
- LangChain providers (e.g. the `ChatOpenAI` class) and provider dependencies (e.g. the `langchain-openai` optional dependency) have been removed. The only optional dependency is `boto3`, which is required for using Bedrock models with LiteLLM.
- Jupyter AI no longer spends 3,000–4,000 ms loading providers from entry points on server startup. Around 1,000 models from several providers are available for use without loading entry points or installing optional dependencies.
- Model IDs now follow the LiteLLM syntax, `<provider-name>/<model-name>` (see the sketch after this list).
- The AI settings page has been rewritten to be more usable and reliable.
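For reference, a minimal sketch of the model registry and model ID syntax described above, based on LiteLLM's public API as we understand it; the model IDs are placeholders, and this is not code from this PR:

```python
import litellm

# LiteLLM bundles a static registry of known models, so Jupyter AI can
# enumerate them without loading entry points or optional dependencies.
print(len(litellm.model_cost))                   # on the order of 1,000 models
print(litellm.models_by_provider["openai"][:5])  # models grouped by provider

# Model IDs follow the "<provider-name>/<model-name>" syntax:
response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # placeholder model ID
    messages=[{"role": "user", "content": "Hi!"}],
)

# Bedrock models use the same ID syntax (e.g. "bedrock/<model-name>"),
# but calling them additionally requires the optional boto3 dependency.
```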