feat: Add Ollama LLM provider for local model support #343
Closed
SolariSystems wants to merge 1 commit into Spectral-Finance:main
Conversation
Fixes Spectral-Finance#96

Implements complete Ollama integration enabling self-hosted LLM capabilities:

## Features
- Full Lux.LLM behaviour implementation with call/3
- Model management: list, pull, delete, show, ensure_model
- Health check for connection verification
- Tool calling support with Beams, Prisms, and Lenses
- JSON response formatting

## Configuration
- Configurable endpoint (default: http://localhost:11434)
- Model presets: default, smartest, fastest, coding
- Environment variable overrides (OLLAMA_HOST, OLLAMA_MODEL)
- Adjustable timeouts and connection pooling

## Files
- lib/lux/llm/ollama.ex - Main provider implementation
- config/ollama.exs - Configuration reference
- test/integration/ollama_test.exs - Integration tests
- config/config.exs - Default model presets

Generated by Solari Bounty System
https://github.com/SolariSystems

Co-Authored-By: Solari Systems <solarisys2025@gmail.com>
Author
Closing in favor of #344 which includes this work plus the provider abstraction layer.
/claim #96
Summary
Fixes #96
Implements a complete Ollama LLM provider for self-hosted local model support, enabling:
- Local inference through Ollama's HTTP API (/api/chat, /api/generate)
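For orientation, the two endpoints named above are plain HTTP and can be exercised directly from Elixir. The sketch below hits /api/chat without going through Lux at all; the Req client and the llama3.1 model name are illustrative assumptions, not part of this PR.

```elixir
# Standalone sketch: talk to a local Ollama server's /api/chat endpoint.
# Assumes Ollama is running on the default port and the model is already pulled.
Mix.install([{:req, "~> 0.5"}])

response =
  Req.post!("http://localhost:11434/api/chat",
    json: %{
      model: "llama3.1",
      messages: [%{role: "user", content: "Say hello in one word."}],
      stream: false
    },
    # Local models can be slow on first load, so allow a generous timeout
    receive_timeout: 120_000
  )

IO.puts(response.body["message"]["content"])
```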
Features

Core LLM Provider (lib/lux/llm/ollama.ex)
- @behaviour Lux.LLM interface
- call/3 - Main inference endpoint with tool support
- list_models/1 - List locally available models
- pull_model/2 - Download models from Ollama library
- delete_model/2 - Remove local models
- show_model/2 - Get model information
- ensure_model/2 - Auto-pull if model not available
- health_check/1 - Verify Ollama server accessibility
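To make the function list above concrete, here is a rough sketch of how the model-management calls could hang together around the documented Ollama endpoints. This is not the PR's module; the Req HTTP client, the option keys, and the return shapes are assumptions for illustration.

```elixir
# Illustrative skeleton only, not the PR's lib/lux/llm/ollama.ex.
# Assumes the Req HTTP client and the public Ollama endpoints (/, /api/tags, /api/pull).
defmodule MyOllamaSketch do
  @default_endpoint "http://localhost:11434"

  # GET / answers 200 when the server is reachable.
  def health_check(opts \\ []) do
    case Req.get(endpoint(opts)) do
      {:ok, %{status: 200}} -> :ok
      {:ok, %{status: status}} -> {:error, {:unexpected_status, status}}
      {:error, reason} -> {:error, reason}
    end
  end

  # /api/tags lists locally available models.
  def list_models(opts \\ []) do
    case Req.get(endpoint(opts) <> "/api/tags") do
      {:ok, %{status: 200, body: body}} -> {:ok, Enum.map(body["models"], & &1["name"])}
      other -> {:error, other}
    end
  end

  # Auto-pull the model if it is not present locally.
  def ensure_model(model, opts \\ []) do
    with {:ok, models} <- list_models(opts) do
      if model in models, do: :ok, else: pull_model(model, opts)
    end
  end

  # /api/pull downloads a model from the Ollama library.
  def pull_model(model, opts \\ []) do
    case Req.post(endpoint(opts) <> "/api/pull", json: %{name: model, stream: false}) do
      {:ok, %{status: 200}} -> :ok
      other -> {:error, other}
    end
  end

  defp endpoint(opts) do
    Keyword.get(opts, :endpoint, System.get_env("OLLAMA_HOST", @default_endpoint))
  end
end
```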
Configuration (config/ollama.exs)
- Environment variable overrides (OLLAMA_HOST, OLLAMA_MODEL)

Integration Tests (test/integration/ollama_test.exs)

Configuration
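The configuration snippet that presumably sat here did not survive extraction. A hedged reconstruction of what config/ollama.exs could look like, built only from the options named in the commit message; every key name and model preset value below is an assumption.

```elixir
# Hypothetical shape for config/ollama.exs; the PR's actual keys may differ.
import Config

config :lux, Lux.LLM.Ollama,
  # Endpoint, overridable via OLLAMA_HOST (PR default: http://localhost:11434)
  endpoint: System.get_env("OLLAMA_HOST", "http://localhost:11434"),
  # Presets named in the commit message; concrete model names are illustrative
  models: %{
    default: System.get_env("OLLAMA_MODEL", "llama3.1"),
    smartest: "llama3.1:70b",
    fastest: "llama3.2:1b",
    coding: "qwen2.5-coder"
  },
  # Adjustable timeouts and connection pooling
  receive_timeout: 120_000,
  pool_size: 10
```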
Usage
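The usage example from the original description also appears to be missing. Below is a minimal sketch under the assumptions that the module is named Lux.LLM.Ollama after its file path and that call/3 takes a prompt, a tool list, and provider options; the exact signatures belong to the PR, not to this sketch.

```elixir
# Hypothetical usage; arities follow the feature list above, exact signatures are assumed.
config = %{endpoint: "http://localhost:11434", model: "llama3.1"}

# Verify the local server is up, then make sure the model is present
:ok = Lux.LLM.Ollama.health_check(config)
:ok = Lux.LLM.Ollama.ensure_model("llama3.1", config)

# call/3 with a prompt, no tools, and provider config;
# per the test plan, the response is a Lux.Signal
{:ok, signal} = Lux.LLM.Ollama.call("Explain what a Lux Beam is in one sentence.", [], config)
IO.inspect(signal)
```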
Test Plan
- health_check/1 returns :ok when Ollama running
- list_models/1 returns available models
- call/3 returns valid Lux.Signal response
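Those checklist items map naturally onto an ExUnit integration suite. A sketch of how they might be asserted, with the test module name, config shape, and return values assumed rather than taken from the PR:

```elixir
# Illustrative only; the PR's real suite is test/integration/ollama_test.exs.
defmodule OllamaIntegrationSketchTest do
  use ExUnit.Case, async: false

  @moduletag :integration
  @config %{endpoint: "http://localhost:11434", model: "llama3.1"}

  test "health_check/1 returns :ok when Ollama is running" do
    assert :ok = Lux.LLM.Ollama.health_check(@config)
  end

  test "list_models/1 returns available models" do
    assert {:ok, models} = Lux.LLM.Ollama.list_models(@config)
    assert is_list(models)
  end

  test "call/3 returns a valid Lux.Signal response" do
    # The {:ok, %Lux.Signal{}} shape is assumed from the checklist wording
    assert {:ok, %Lux.Signal{}} = Lux.LLM.Ollama.call("ping", [], @config)
  end
end
```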
Generated by Solari Bounty System
https://github.com/SolariSystems