
feat: Add universal LLM provider abstraction layer#344

Open
SolariSystems wants to merge 3 commits into Spectral-Finance:main from SolariSystems:feat/provider-abstraction-layer

Conversation

@SolariSystems commented Dec 5, 2025

/claim #99

Summary

Fixes #99

Implements a comprehensive LLM provider management system that enables:

  • Universal Provider Interface - Single API to call any LLM provider
  • Automatic Model Selection - Choose providers based on task type
  • Smart Fallback Handling - Automatic failover when providers fail
  • Cost Tracking - Monitor API costs per provider
  • Performance Monitoring - Track latency and success rates

Features

ProviderManager GenServer (lib/lux/llm/provider_manager.ex)

  • call/3 - Universal LLM call with automatic provider selection
  • list_providers/0 - List all providers and their status
  • recommend_provider/1 - Get best provider for a task type
  • estimate_cost/3 - Estimate API cost before calling
  • get_stats/0 - Get performance and cost statistics

Provider Configuration (config/providers.exs)

  • Default provider and fallback chain settings
  • Cost-per-token configuration for budgeting
  • Task-to-provider mappings (:coding, :creative, :analysis, :local, :cheap, :fast)
  • Environment variable support
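A sketch of what `config/providers.exs` might contain, based on the bullets above (all key names, provider atoms, and prices here are illustrative assumptions, not the file's verified contents):

```elixir
# config/providers.exs -- illustrative sketch; actual keys may differ
import Config

config :lux, :providers,
  default: :openai,
  fallback_chain: [:openai, :anthropic, :ollama],
  # cost per 1K tokens in USD, consumed by estimate_cost/3 (prices are placeholders)
  costs: %{
    openai: %{input: 0.0025, output: 0.01},
    anthropic: %{input: 0.003, output: 0.015},
    ollama: %{input: 0.0, output: 0.0}
  },
  # task-to-provider mappings listed in the PR description
  tasks: %{
    coding: :anthropic,
    creative: :anthropic,
    analysis: :openai,
    local: :ollama,
    cheap: :together_ai,
    fast: :together_ai
  }
```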

Unit Tests (test/lux/llm/provider_manager_test.exs)

  • Provider listing tests
  • Task recommendation tests
  • Cost estimation tests
  • Statistics tracking tests
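A minimal ExUnit shape for one of these tests might look like the following (the provider atom names and the exact assertion are assumptions; the PR's actual expectations may differ):

```elixir
defmodule Lux.LLM.ProviderManagerTest do
  use ExUnit.Case, async: true

  alias Lux.LLM.ProviderManager

  test "recommend_provider/1 returns a provider for a known task type" do
    # :coding is one of the configured task mappings; atom names assumed
    provider = ProviderManager.recommend_provider(:coding)
    assert provider in [:openai, :anthropic, :ollama, :together_ai, :mira]
  end
end
```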

Usage

```elixir
# Simple call with default provider
ProviderManager.call("What is 2+2?")

# Task-based automatic selection
ProviderManager.call("Generate code", task: :coding)

# With explicit fallback chain
ProviderManager.call("Complex task", fallback: [:openai, :anthropic, :ollama])

# Check statistics
ProviderManager.get_stats()
```

Supported Providers

| Provider   | Use Case          |
|------------|-------------------|
| OpenAI     | General, Analysis |
| Anthropic  | Coding, Creative  |
| Ollama     | Local, Budget     |
| TogetherAI | Budget, Fast      |
| Mira       | General           |

Test Plan

  • Provider listing returns all 5 providers
  • Task recommendations return appropriate providers
  • Cost estimation calculates correctly
  • Statistics tracking updates properly

Fixes Spectral-Finance#96

Implements complete Ollama integration enabling self-hosted LLM capabilities:

## Features
- Full Lux.LLM behaviour implementation with call/3
- Model management: list, pull, delete, show, ensure_model
- Health check for connection verification
- Tool calling support with Beams, Prisms, and Lenses
- JSON response formatting

## Configuration
- Configurable endpoint (default: http://localhost:11434)
- Model presets: default, smartest, fastest, coding
- Environment variable overrides (OLLAMA_HOST, OLLAMA_MODEL)
- Adjustable timeouts and connection pooling
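Putting those configuration points together, the reference config might be shaped roughly like this (the preset model name and timeout value are assumptions; the endpoint default and environment variables come from the bullets above):

```elixir
# config/ollama.exs -- illustrative sketch of the documented options
import Config

config :lux, Lux.LLM.Ollama,
  # OLLAMA_HOST override, defaulting to the documented local endpoint
  endpoint: System.get_env("OLLAMA_HOST", "http://localhost:11434"),
  # OLLAMA_MODEL override; the default preset name here is a placeholder
  model: System.get_env("OLLAMA_MODEL", "llama3.2"),
  # adjustable timeout; 120s is an assumed value
  receive_timeout: 120_000
```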

## Files
- lib/lux/llm/ollama.ex - Main provider implementation
- config/ollama.exs - Configuration reference
- test/integration/ollama_test.exs - Integration tests
- config/config.exs - Default model presets

Generated by Solari Bounty System
https://github.com/SolariSystems

Co-Authored-By: Solari Systems <solarisys2025@gmail.com>
Fixes Spectral-Finance#99

Implements a comprehensive provider management system for LLM integrations:

## Features

- **ProviderManager GenServer** (`lib/lux/llm/provider_manager.ex`)
  - Universal interface to call any registered LLM provider
  - Automatic provider and model selection based on task type
  - Intelligent fallback handling when providers fail
  - Cost tracking and estimation per provider
  - Performance monitoring (latency, success/failure rates)
  - Statistics collection and reporting

- **Provider Configuration** (`config/providers.exs`)
  - Default provider and fallback chain settings
  - Cost-per-token configuration for budgeting
  - Task-to-provider mappings (coding, creative, analysis, etc.)
  - Environment variable support for runtime configuration

- **Unit Tests** (`test/lux/llm/provider_manager_test.exs`)
  - Tests for provider listing and status checking
  - Task recommendation tests
  - Cost estimation tests
  - Statistics tracking tests

## Usage

```elixir
# Simple call with default provider
ProviderManager.call("What is 2+2?")

# Task-based automatic selection
ProviderManager.call("Generate code", task: :coding)

# With fallback chain
ProviderManager.call("Complex task", fallback: [:openai, :anthropic, :ollama])

# Get statistics
ProviderManager.get_stats()
```

## Supported Providers

- OpenAI (GPT-4o, GPT-4o-mini)
- Anthropic (Claude 3)
- Ollama (local models)
- TogetherAI
- Mira

Enhanced provider manager with:

- Circuit breaker pattern: Automatic failover when providers fail
  (threshold: 5 failures, 1min recovery)
- LRU caching: Response caching with TTL and size limits (1000 max)
- Rate limiting: Per-provider token bucket (60/min cloud, 1000/min local)
- Health checks: Periodic provider availability monitoring
- Latency percentiles: p50/p95/p99 tracking for SLA monitoring
- Model-specific pricing: Accurate cost tracking per model variant
- Token extraction: Real usage tracking from API responses
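The per-provider token bucket described above can be sketched as a small pure data structure (module and function names are illustrative, not the PR's implementation; rates mirror the 60/min cloud limit):

```elixir
defmodule TokenBucket do
  # Minimal token bucket: `rate` tokens refill evenly over `interval_ms`.
  defstruct tokens: 60, rate: 60, interval_ms: 60_000, last_refill: 0

  def new(rate, interval_ms, now_ms) do
    %__MODULE__{tokens: rate, rate: rate, interval_ms: interval_ms, last_refill: now_ms}
  end

  # Try to consume one token at time `now_ms`.
  def take(%__MODULE__{} = b, now_ms) do
    b = refill(b, now_ms)

    if b.tokens >= 1 do
      {:ok, %{b | tokens: b.tokens - 1}}
    else
      {:error, :rate_limited, b}
    end
  end

  defp refill(b, now_ms) do
    elapsed = now_ms - b.last_refill
    refilled = b.tokens + b.rate * elapsed / b.interval_ms
    %{b | tokens: min(b.rate, refilled), last_refill: now_ms}
  end
end
```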

Tests expanded from 10 to 40+ covering all new features.
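The circuit-breaker behavior (open after 5 consecutive failures, allow a probe after a 1-minute recovery window) can be sketched as a small state machine; this is a self-contained illustration, not the PR's actual module:

```elixir
defmodule CircuitBreaker do
  # Opens after `threshold` consecutive failures; half-opens after `recovery_ms`.
  defstruct failures: 0, threshold: 5, opened_at: nil, recovery_ms: 60_000

  # A success closes the breaker and resets the failure count.
  def record(%__MODULE__{} = cb, :ok), do: %{cb | failures: 0, opened_at: nil}

  # A failure increments the count and opens the breaker at the threshold.
  def record(%__MODULE__{} = cb, :error) do
    failures = cb.failures + 1

    if failures >= cb.threshold do
      %{cb | failures: failures, opened_at: System.monotonic_time(:millisecond)}
    else
      %{cb | failures: failures}
    end
  end

  def allow?(%__MODULE__{opened_at: nil}), do: true

  def allow?(%__MODULE__{} = cb) do
    # Half-open: permit a single probe call once the recovery window elapses
    System.monotonic_time(:millisecond) - cb.opened_at >= cb.recovery_ms
  end
end
```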
@SolariSystems force-pushed the feat/provider-abstraction-layer branch from e7b02fa to ff1d294 on January 11, 2026 at 03:40


Development

Successfully merging this pull request may close these issues.

LLM Provider Abstraction Layer $600