
Conversation

@axosoft-ramint (Contributor)

Adds support for Ollama as a provider (requires an Ollama server to be configured and running).

Prompts for a URL when Ollama is chosen or used as the provider and no URL is configured.

Shows messaging when no models are installed.
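
For context, here is a minimal sketch of the flow this enables, assuming Ollama's standard `GET /api/tags` endpoint for listing locally installed models; the function and type names below are hypothetical and not the actual GitLens code:

```typescript
// Hypothetical sketch (not the actual GitLens implementation): list the models
// installed on an Ollama server via its standard REST endpoint, GET /api/tags.
interface OllamaTagsResponse {
	models: { name: string }[]; // e.g. "llama3.3:latest"
}

const DEFAULT_OLLAMA_URL = 'http://localhost:11434';

async function getInstalledModels(baseUrl: string = DEFAULT_OLLAMA_URL): Promise<string[]> {
	// Trim any trailing slash so "http://localhost:11434/" also works.
	const rsp = await fetch(`${baseUrl.replace(/\/+$/, '')}/api/tags`);
	if (!rsp.ok) throw new Error(`Ollama server responded with HTTP ${rsp.status}`);

	const data = (await rsp.json()) as OllamaTagsResponse;
	return data.models.map(m => m.name);
}

async function chooseOllamaModel(baseUrl?: string): Promise<string | undefined> {
	const models = await getInstalledModels(baseUrl);
	if (models.length === 0) {
		// This is where the "no models installed" messaging fits.
		console.warn('No models installed. Install one from https://ollama.com/library');
		return undefined;
	}
	// In the extension this would feed a model picker; here we just take the first.
	return models[0];
}
```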

To use:

  1. Download Ollama: https://ollama.com/download
  2. Install a model from the library: https://ollama.com/library (note: you can use any terminal and run, for example, `ollama run llama3.3`, once Ollama is installed and configured)
  3. Make sure Ollama is running, then use the "switch AI model" flow, choose Ollama, and enter your server URL (the default is http://localhost:11434 if you're running it locally).
  4. Your installed models should show up in a list; choose one and you're good to go. (To sanity-check the server and models outside the UI, see the sketch after this list.)
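
If you want to confirm steps 3 and 4 from a terminal first, a minimal check (run with Node 18+, e.g. `npx tsx check-ollama.ts`; the filename and the `OLLAMA_URL` environment variable are just placeholders, not part of this PR) could look like:

```typescript
// check-ollama.ts (hypothetical helper, not part of this PR): confirm the
// server is reachable and list the models it reports via GET /api/tags.
const url = process.env.OLLAMA_URL ?? 'http://localhost:11434';

const rsp = await fetch(`${url}/api/tags`);
if (!rsp.ok) throw new Error(`Ollama is not reachable at ${url} (HTTP ${rsp.status})`);

const { models } = (await rsp.json()) as { models: { name: string }[] };
console.log(models.length > 0 ? models.map(m => m.name).join('\n') : 'No models installed yet');
```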

Closes #3311

Co-authored-by: Ramin Tadayon <[email protected]>
@eamodio merged commit 3782aad into main on Apr 29, 2025
@eamodio (Member) left a comment:


:shipit:

@SergioNR

Just passing by to show appreciation for this feature!

Thank you, GitKraken team!



Development

Successfully merging this pull request may close these issues: Local AI providers

4 participants