Add base API URL field for Ollama and OpenAI embedding models (#1136)
* Base API URL added for embedding models
Jupyter AI currently allows the user to call a model at a URL (location) different from the default one by specifying a Base API URL. This can be done for Ollama and OpenAI provider models. However, for these providers, there is no way to change the API URL for embedding models when using the `/learn` command in RAG mode. This PR adds an extra field to make this possible.
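The new field ends up being passed through to the embedding client. A minimal sketch of the idea, assuming LangChain's `OllamaEmbeddings` client and a placeholder model name (this is illustrative, not Jupyter AI's actual wiring):

```python
# Sketch: an embedding client pointed at a non-default Ollama URL.
# "nomic-embed-text" and the port are placeholders for this example.
from langchain_community.embeddings import OllamaEmbeddings

embeddings = OllamaEmbeddings(
    model="nomic-embed-text",            # any locally pulled embedding model
    base_url="http://127.0.0.1:11435",   # the Base API URL this PR exposes for embeddings
)

vector = embeddings.embed_query("hello world")
print(len(vector))  # dimensionality of the returned embedding
```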
Tested as follows for Ollama:
[1] Start the Ollama server on port 11435 instead of 11434 (the default):
`OLLAMA_HOST=127.0.0.1:11435 ollama serve`
[2] Set the embedding model's Base API URL in the Jupyter AI settings:
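To confirm the value was persisted, you can inspect Jupyter AI's config file. The path and key name below are assumptions for illustration, not a documented schema:

```python
# Hypothetical check of the saved settings; adjust the path/keys to your setup.
import json
from pathlib import Path

cfg_path = Path.home() / ".local/share/jupyter/jupyter_ai/config.json"
cfg = json.loads(cfg_path.read_text())

# Assumed key holding per-embedding-model fields such as the Base API URL.
print(cfg.get("embeddings_fields"))
```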
[3] Check that the new API URL works:
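One way to verify is to confirm the server responds on the new port before indexing anything; a small check against the Ollama REST API (the `requests` dependency is assumed here):

```python
# Sanity check: the Ollama server should answer on the non-default port.
import requests

resp = requests.get("http://127.0.0.1:11435/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])
```

After that, running `/learn` on a directory in the chat should send embedding requests to port 11435 rather than the default 11434.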
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* allow embedding model fields to be saved
* exclude empty str fields from config manager
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: David L. Qiu <[email protected]>