Skip to content

feat: Add Ollama LLM provider for local model support #343

Closed

SolariSystems wants to merge 1 commit into Spectral-Finance:main from SolariSystems:feat/ollama-integration

Conversation

@SolariSystems

/claim #96

Summary

Fixes #96

Implements a complete Ollama LLM provider for self-hosted local model support, enabling:

  • Local Model Inference via Ollama API (/api/chat, /api/generate)
  • Model Management - list, pull, delete, show models
  • Tool Calling with Beams, Prisms, and Lenses
  • Health Checking and automatic model availability verification
  • Configuration via environment variables and config files

Features

Core LLM Provider (lib/lux/llm/ollama.ex)

  • Implements @behaviour Lux.LLM interface
  • call/3 - Main inference endpoint with tool support
  • list_models/1 - List locally available models
  • pull_model/2 - Download models from Ollama library
  • delete_model/2 - Remove local models
  • show_model/2 - Get model information
  • ensure_model/2 - Auto-pull if model not available
  • health_check/1 - Verify Ollama server accessibility (see the sketch after this list)
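
For orientation, here is a minimal sketch of how two of these functions could map onto the Ollama HTTP API. Req is used purely for illustration, and the return shapes are assumptions rather than the PR's actual implementation:

defmodule OllamaSketch do
  @endpoint "http://localhost:11434"

  # GET /api/tags returns the models available on the local Ollama server.
  def list_models do
    case Req.get("#{@endpoint}/api/tags") do
      {:ok, %{status: 200, body: %{"models" => models}}} -> {:ok, models}
      {:ok, %{status: status}} -> {:error, {:unexpected_status, status}}
      {:error, reason} -> {:error, reason}
    end
  end

  # A server that answers /api/tags is treated as healthy.
  def health_check do
    case list_models() do
      {:ok, _models} -> :ok
      {:error, reason} -> {:error, reason}
    end
  end
end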

Configuration (config/ollama.exs)

  • Model presets: default, smartest, fastest, coding, embeddings
  • Environment variable support (OLLAMA_HOST, OLLAMA_MODEL; see the sketch after this list)
  • HTTP client configuration (timeouts, pool size)
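
A hedged sketch of how those environment variables could be wired into runtime configuration. The file name and exact keys are assumptions; only OLLAMA_HOST, OLLAMA_MODEL, and the defaults below come from this PR:

# config/runtime.exs (sketch)
import Config

config :lux, Lux.LLM.Ollama,
  endpoint: System.get_env("OLLAMA_HOST", "http://localhost:11434"),
  receive_timeout: 120_000

config :lux, :ollama_models,
  default: System.get_env("OLLAMA_MODEL", "llama3.2")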

Integration Tests (test/integration/ollama_test.exs)

  • Health check tests
  • Model listing tests
  • Inference tests (text and JSON responses)
  • Error handling tests
  • Conditional skip when Ollama is not available (see the sketch below)
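
A sketch of how the conditional skip might be arranged with ExUnit tags. The tag name and test body are illustrative, not the contents of test/integration/ollama_test.exs:

# test/test_helper.exs (sketch): exclude the tag unless a local server responds
ExUnit.start()

if Lux.LLM.Ollama.health_check() != :ok do
  ExUnit.configure(exclude: [:ollama])
end

# test/integration/ollama_test.exs (sketch)
defmodule Lux.Integration.OllamaSketchTest do
  use ExUnit.Case, async: false
  @moduletag :ollama

  test "lists locally available models" do
    assert {:ok, models} = Lux.LLM.Ollama.list_models()
    assert is_list(models)
  end
end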

Configuration

config :lux, :ollama_models,
  default: "llama3.2",
  smartest: "llama3.2:70b",
  fastest: "llama3.2:1b",
  coding: "codellama"

config :lux, Lux.LLM.Ollama,
  endpoint: "http://localhost:11434",
  receive_timeout: 120_000

Usage

# Basic inference
Lux.LLM.Ollama.call("What is the capital of France?", [], %{})

# With specific model
Lux.LLM.Ollama.call("Explain quantum computing", [], %{model: "llama3.2:70b"})

# List models
Lux.LLM.Ollama.list_models()

# Ensure model is available (auto-pull if needed)
Lux.LLM.Ollama.ensure_model("mistral")
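
Two more of the functions above, sketched in the same style (zero-argument calls assume the same defaults as list_models/1; the :ok return comes from the test plan below):

# Check server health (expected to return :ok when Ollama is running)
:ok = Lux.LLM.Ollama.health_check()

# Explicitly pull a model from the Ollama library (model name is just an example)
Lux.LLM.Ollama.pull_model("llama3.2:1b")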

Test Plan

  • Health check returns :ok when Ollama is running
  • list_models/1 returns available models
  • call/3 returns valid Lux.Signal response
  • JSON response mode returns structured output
  • Error handling for non-existent models
  • Tests skip gracefully when Ollama is not available

Generated by Solari Bounty System
https://github.com/SolariSystems

Fixes Spectral-Finance#96

Implements a complete Ollama integration enabling self-hosted LLM capabilities:

## Features
- Full Lux.LLM behaviour implementation with call/3
- Model management: list, pull, delete, show, ensure_model
- Health check for connection verification
- Tool calling support with Beams, Prisms, and Lenses
- JSON response formatting

## Configuration
- Configurable endpoint (default: http://localhost:11434)
- Model presets: default, smartest, fastest, coding
- Environment variable overrides (OLLAMA_HOST, OLLAMA_MODEL)
- Adjustable timeouts and connection pooling

## Files
- lib/lux/llm/ollama.ex - Main provider implementation
- config/ollama.exs - Configuration reference
- test/integration/ollama_test.exs - Integration tests
- config/config.exs - Default model presets

Generated by Solari Bounty System
https://github.com/SolariSystems

Co-Authored-By: Solari Systems <solarisys2025@gmail.com>
@SolariSystems
Author

Closing in favor of #344 which includes this work plus the provider abstraction layer.
