Merged
Changes from 2 commits
8 changes: 7 additions & 1 deletion AGENTS.md
Original file line number Diff line number Diff line change
@@ -224,6 +224,12 @@ bin/test test/integration/open_ai/
- Access 200+ models through single API
- Provider preferences via `provider: { order: [...] }`

### RubyLLM
- Uses `ruby_llm` gem for unified access to 15+ providers
- RubyLLM manages its own API keys via `RubyLLM.configure`
- Model ID determines which provider is used automatically
- Supports prompts, embeddings, tool calling, and streaming

## Common Gotchas

1. **Generation is lazy** - Nothing happens until `generate_now` or `prompt_later`
@@ -253,7 +259,7 @@ bin/rubocop

- Ruby 3.1+
- Rails 7.2+ / 8.0+ / 8.1+
- Provider gems (optional): `openai`, `anthropic`
- Provider gems (optional): `openai`, `anthropic`, `ruby_llm`

## Links

8 changes: 7 additions & 1 deletion README.md
@@ -39,6 +39,9 @@ bundle add openai

# OpenRouter (uses OpenAI-compatible API)
bundle add openai

# RubyLLM (unified API for 15+ providers)
bundle add ruby_llm
```

### Setup
@@ -119,12 +122,15 @@ development:
ollama:
service: "Ollama"
model: "llama3.2"

ruby_llm:
service: "RubyLLM"
```

## Features

- **Agent-Oriented Programming**: Build AI applications using familiar Rails patterns
- **Multiple Provider Support**: Works with OpenAI, Anthropic, Ollama, and more
- **Multiple Provider Support**: Works with OpenAI, Anthropic, Ollama, RubyLLM, and more
- **Action-Based Design**: Define agent capabilities through actions
- **View Templates**: Use ERB templates for prompts (text, JSON, HTML)
- **Streaming Support**: Real-time response streaming with ActionCable
4 changes: 3 additions & 1 deletion Rakefile
@@ -4,7 +4,9 @@ require "rake/testtask"

Rake::TestTask.new(:test) do |t|
t.libs << "test"
t.pattern = "test/**/*_test.rb"
t.test_files = FileList["test/**/*_test.rb"]
.exclude("test/**/integration_test.rb")
.exclude("test/dummy/tmp/**/*")
t.verbose = true
end

1 change: 1 addition & 0 deletions activeagent.gemspec
@@ -29,6 +29,7 @@ Gem::Specification.new do |spec|

spec.add_development_dependency "anthropic", "~> 1.12"
spec.add_development_dependency "openai", "~> 0.34"
spec.add_development_dependency "ruby_llm", ">= 1.0"

spec.add_development_dependency "capybara", "~> 3.40"
spec.add_development_dependency "cuprite", "~> 0.15"
1 change: 1 addition & 0 deletions docs/actions/embeddings.md
@@ -84,3 +84,4 @@ Different models produce different embedding dimensions:
- [Testing](/framework/testing) - Test embedding functionality
- [OpenAI Provider](/providers/open_ai) - OpenAI embedding models
- [Ollama Provider](/providers/ollama) - Local embedding generation
- [RubyLLM Provider](/providers/ruby_llm) - Embeddings via RubyLLM's unified API
1 change: 1 addition & 0 deletions docs/actions/mcps.md
@@ -18,6 +18,7 @@ Connect agents to external services via [Model Context Protocol](https://modelco
| **Anthropic** | ⚠️ | Beta |
| **OpenRouter** | 🚧 | In development |
| **Ollama** | ❌ | Not supported |
| **RubyLLM** | ❌ | Not supported (use provider-specific integration) |
| **Mock** | ❌ | Not supported |

## MCP Format
1 change: 1 addition & 0 deletions docs/actions/structured_output.md
@@ -23,6 +23,7 @@ Two JSON response formats:
| **Anthropic** | 🟦 | ❌ | Emulated via prompt engineering technique |
| **OpenRouter** | 🟩 | 🟩 | Native support, depends on underlying model |
| **Ollama** | 🟨 | 🟨 | Model-dependent, support varies by model |
| **RubyLLM** | 🟨 | 🟨 | Depends on underlying provider/model |
| **Mock** | 🟩 | 🟩 | Accepted but not validated or enforced |

## JSON Object Mode
1 change: 1 addition & 0 deletions docs/actions/tools.md
@@ -23,6 +23,7 @@ The LLM calls `get_weather` automatically when it needs weather data, and uses t
| **Anthropic** | 🟩 | 🟩 | Full support for built-in tools |
| **OpenRouter** | 🟩 | ❌ | Model-dependent capabilities |
| **Ollama** | 🟩 | ❌ | Model-dependent capabilities |
| **RubyLLM** | 🟩 | ❌ | Depends on underlying provider/model |
| **Mock** | 🟦 | ❌ | Accepted but not enforced |

For **MCP (Model Context Protocol)** support, see the [MCP documentation](/actions/mcps).
4 changes: 2 additions & 2 deletions docs/framework.md
@@ -113,7 +113,7 @@ Actions call `prompt()` or `embed()` to configure requests. Callbacks manage con

ActiveAgent integrates with Rails features and AI capabilities:

- **[Providers](/providers)** - Swap AI services (OpenAI, Anthropic, Ollama, OpenRouter)
- **[Providers](/providers)** - Swap AI services (OpenAI, Anthropic, Ollama, OpenRouter, RubyLLM)
- **[Instructions](/agents/instructions)** - System prompts from templates or strings
- **[Callbacks](/agents/callbacks)** - Lifecycle hooks for context and logging
- **[Tools](/actions/tools)** - Agent methods as AI-callable functions
@@ -133,7 +133,7 @@ ActiveAgent integrates with Rails features and AI capabilities:
- [Generation](/agents/generation) - Synchronous and asynchronous execution
- [Instructions](/agents/instructions) - System prompts and behavior guidance
- [Messages](/actions/messages) - Conversation context with multimodal support
- [Providers](/providers) - OpenAI, Anthropic, Ollama, OpenRouter configuration
- [Providers](/providers) - OpenAI, Anthropic, Ollama, OpenRouter, RubyLLM configuration

**Advanced:**
- [Tools](/actions/tools) - AI-callable Ruby methods and MCP integration
3 changes: 2 additions & 1 deletion docs/framework/configuration.md
@@ -206,7 +206,7 @@ Common settings available across all providers:

| Setting | Type | Required | Description |
|---------|------|----------|-------------|
| `service` | String | Yes | Provider class name (OpenAI, Anthropic, OpenRouter, Ollama, Mock) |
| `service` | String | Yes | Provider class name (OpenAI, Anthropic, OpenRouter, Ollama, RubyLLM, Mock) |
| `access_token` / `api_key` | String | Yes* | API authentication key |
| `model` | String | Yes* | Model identifier for the LLM to use |
| `temperature` | Float | No | Randomness control (0.0-2.0, default varies by provider) |
@@ -220,6 +220,7 @@ Common settings available across all providers:
- **[Ollama Provider](/providers/ollama)** - Host configuration for local instances
- **[OpenAI Provider](/providers/open_ai)** - Organization ID, request timeout, admin token, etc.
- **[OpenRouter Provider](/providers/open_router)** - App name, site URL, provider preferences, etc.
- **[RubyLLM Provider](/providers/ruby_llm)** - Unified API for 15+ providers via RubyLLM
- **[Mock Provider](/providers/mock)** - Testing-specific options

### Using Configured Providers
8 changes: 6 additions & 2 deletions docs/getting_started.md
@@ -10,7 +10,7 @@ Build AI agents with Rails in minutes. This guide covers installation, configura

- Ruby 3.0+
- Rails 7.0+
- API key for your chosen provider (OpenAI, Anthropic, or Ollama)
- API key for your chosen provider (OpenAI, Anthropic, Ollama, or RubyLLM)

## Installation

@@ -40,6 +40,10 @@ bundle add openai  # Ollama uses OpenAI-compatible API
bundle add openai # OpenRouter uses OpenAI-compatible API
```

```bash [RubyLLM]
bundle add ruby_llm # Unified API for 15+ providers
```

:::

Run the install generator:
@@ -202,7 +206,7 @@ See **[Generation](/agents/generation)** for background jobs, callbacks, and res
- **[Tools](/actions/tools)** - Function calling and MCP integration
- **[Structured Output](/actions/structured_output)** - Parse JSON with schemas
- **[Embeddings](/actions/embeddings)** - Vector generation for semantic search
- **[Providers](/providers)** - OpenAI, Anthropic, Ollama, OpenRouter
- **[Providers](/providers)** - OpenAI, Anthropic, Ollama, OpenRouter, RubyLLM

**Framework:**
- **[Configuration](/framework/configuration)** - Environment settings, precedence
16 changes: 16 additions & 0 deletions docs/providers.md
@@ -20,6 +20,15 @@ Providers connect your agents to AI services through a unified interface. Switch

<<< @/../test/dummy/app/agents/providers/mock_agent.rb#agent{ruby} [Mock]

```ruby [RubyLLM]
class RubyLLMAgent < ApplicationAgent
generate_with :ruby_llm, model: "gpt-4o-mini"
# Works with any model RubyLLM supports:
# generate_with :ruby_llm, model: "claude-sonnet-4-5-20250929"
# generate_with :ruby_llm, model: "gemini-2.0-flash"
end
```

:::

## Choosing a Provider
@@ -52,6 +61,13 @@ Access 200+ models from OpenAI, Anthropic, Google, Meta, and more through one AP

**Choose when:** You want to compare models, need fallback options, or want flexible provider switching. Good for reducing vendor lock-in.

### [RubyLLM](/providers/ruby_llm)
**Best for:** Multi-provider flexibility through a single unified gem

Access 15+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Azure, Ollama, and more) through RubyLLM's unified API. Switch models by changing a single parameter.

**Choose when:** You want a single gem to access multiple providers, prefer RubyLLM's configuration model, or want to switch between providers without changing provider configuration.

### [Mock](/providers/mock)
**Best for:** Testing, development, offline work

178 changes: 178 additions & 0 deletions docs/providers/ruby_llm.md
@@ -0,0 +1,178 @@
---
title: RubyLLM Provider
description: Unified access to 15+ LLM providers through the RubyLLM gem. Use OpenAI, Anthropic, Gemini, Bedrock, Azure, Ollama, and more with a single provider configuration.
---
# {{ $frontmatter.title }}

The RubyLLM provider gives your agents access to 15+ LLM providers through [RubyLLM](https://rubyllm.com)'s unified API. Switch between OpenAI, Anthropic, Gemini, Bedrock, Azure, Ollama, and more by changing the model parameter.

## Configuration

### Basic Setup

Configure RubyLLM in your agent:

```ruby
class MyAgent < ApplicationAgent
generate_with :ruby_llm, model: "gpt-4o-mini"
end
```

### RubyLLM API Keys

RubyLLM manages its own API keys. Configure them in an initializer:

```ruby
# config/initializers/ruby_llm.rb
RubyLLM.configure do |config|
config.openai_api_key = Rails.application.credentials.dig(:openai, :api_key)
config.anthropic_api_key = Rails.application.credentials.dig(:anthropic, :api_key)
config.gemini_api_key = Rails.application.credentials.dig(:gemini, :api_key)
# Add keys for any providers you want to use
end
```

### Configuration File

Set up RubyLLM in `config/active_agent.yml`:

```yaml
ruby_llm: &ruby_llm
service: "RubyLLM"

development:
ruby_llm:
<<: *ruby_llm

production:
ruby_llm:
<<: *ruby_llm
```

## Supported Models

RubyLLM automatically resolves which provider to use based on the model ID. Any model supported by RubyLLM works with this provider. For the complete list, see [RubyLLM's documentation](https://rubyllm.com).

### Examples by Provider

| Provider | Example Models |
|----------|---------------|
| **OpenAI** | `gpt-4o`, `gpt-4o-mini`, `gpt-4.1` |
| **Anthropic** | `claude-sonnet-4-5-20250929`, `claude-haiku-4-5` |
| **Google Gemini** | `gemini-2.0-flash`, `gemini-1.5-pro` |
| **AWS Bedrock** | Bedrock-hosted models |
| **Azure OpenAI** | Azure-hosted OpenAI models |
| **Ollama** | `llama3`, `mistral`, locally-hosted models |

Switch providers by changing the model:

```ruby
class FlexibleAgent < ApplicationAgent
# Any of these work with the same provider config:
generate_with :ruby_llm, model: "gpt-4o-mini"
# generate_with :ruby_llm, model: "claude-sonnet-4-5-20250929"
# generate_with :ruby_llm, model: "gemini-2.0-flash"
end
```

## Provider-Specific Parameters

### Required Parameters

- **`model`** - Model identifier (e.g., "gpt-4o-mini", "claude-sonnet-4-5-20250929")

### Sampling Parameters

- **`temperature`** - Controls randomness (typically 0.0 to 1.0; the accepted range depends on the underlying provider)
- **`max_tokens`** - Maximum number of tokens to generate (passed via RubyLLM's `params:` merge)
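
The two sampling options above can be set alongside the model. The sketch below stubs a minimal stand-in for ActiveAgent's class-level DSL so it is self-contained; the exact option names and the `params:` forwarding behavior are assumptions, not verified API.

```ruby
# Stand-in for ActiveAgent's DSL so this sketch runs on its own;
# in a real app you would inherit from ApplicationAgent instead.
class ApplicationAgent
  def self.generate_with(provider, **options)
    @generation_options = { provider: provider, **options }
  end

  def self.generation_options
    @generation_options
  end
end

class TunedAgent < ApplicationAgent
  # temperature is a first-class option; max_tokens is forwarded
  # through RubyLLM's params: merge (names assumed, not verified).
  generate_with :ruby_llm,
    model: "gpt-4o-mini",
    temperature: 0.2,
    max_tokens: 500
end
```

Both options ride along in the same `generate_with` call, so switching models does not require touching the sampling configuration.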

### Client Configuration

Configure timeouts and other settings through RubyLLM directly:

```ruby
RubyLLM.configure do |config|
config.request_timeout = 120
end
```

## Tool Calling

RubyLLM supports tool/function calling on models that provide it. Use the standard ActiveAgent tool format:

```ruby
class WeatherAgent < ApplicationAgent
generate_with :ruby_llm, model: "gpt-4o-mini"

def forecast
prompt(
message: "What's the weather in Boston?",
tools: [{
name: "get_weather",
description: "Get weather for a location",
parameters: {
type: "object",
properties: {
location: { type: "string", description: "City name" }
},
required: ["location"]
}
}]
)
end

def get_weather(location:)
WeatherService.fetch(location)
end
end
```

## Embeddings

Generate embeddings through RubyLLM's unified embedding API:

```ruby
class SearchAgent < ApplicationAgent
generate_with :ruby_llm, model: "gpt-4o-mini"
embed_with :ruby_llm, model: "text-embedding-3-small"

def index_document
embed(input: "Document text to embed")
end
end
```

## Streaming

Streaming is available for models that support it:

```ruby
class StreamingAgent < ApplicationAgent
generate_with :ruby_llm, model: "gpt-4o-mini", stream: true
end
```

See [Streaming](/agents/streaming) for ActionCable integration and real-time updates.

## When to Use RubyLLM vs Direct Providers

**Use RubyLLM when:**
- You want to switch between providers without changing configuration
- You prefer RubyLLM's key management via `RubyLLM.configure`
- You want access to providers that ActiveAgent doesn't have a dedicated implementation for (e.g., Gemini, Bedrock)
- You want a single gem dependency for multi-provider support

**Use a direct provider (OpenAI, Anthropic) when:**
- You need provider-specific features (MCP servers, extended thinking, JSON schema mode)
- You want the tightest integration with a provider's gem SDK
- You need provider-specific error handling classes

## Related Documentation

- [Providers Overview](/providers) - Compare all available providers
- [Getting Started](/getting_started) - Complete setup guide
- [Configuration](/framework/configuration) - Environment-specific settings
- [Tools](/actions/tools) - Function calling
- [Embeddings](/actions/embeddings) - Vector generation
- [Streaming](/agents/streaming) - Real-time response updates
- [RubyLLM Documentation](https://rubyllm.com) - Official RubyLLM docs
3 changes: 2 additions & 1 deletion lib/active_agent/concerns/provider.rb
Original file line number Diff line number Diff line change
Expand Up @@ -12,7 +12,8 @@ module Provider
"Openrouter" => "OpenRouter",
"Openai" => "OpenAI",
"AzureOpenai" => "AzureOpenAI",
"Azureopenai" => "AzureOpenAI"
"Azureopenai" => "AzureOpenAI",
"Rubyllm" => "RubyLLM"
}

included do
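The `"Rubyllm" => "RubyLLM"` entry added above lets a classified service name resolve to the correct provider class. A rough sketch of the lookup it enables (the surrounding resolution logic sits outside this hunk, so the pass-through fallback is an assumption):

```ruby
# Mapping copied from the diff above; canonicalizes classified service names.
CANONICAL_NAMES = {
  "Openrouter" => "OpenRouter",
  "Openai" => "OpenAI",
  "AzureOpenai" => "AzureOpenAI",
  "Azureopenai" => "AzureOpenAI",
  "Rubyllm" => "RubyLLM"
}

# Names without an override fall through unchanged (assumed behavior).
def canonical_service_name(name)
  CANONICAL_NAMES.fetch(name, name)
end
```

This keeps `service: "RubyLLM"` in `active_agent.yml` working regardless of how the YAML value is cased and classified.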
3 changes: 2 additions & 1 deletion lib/active_agent/providers/_base_provider.rb
Original file line number Diff line number Diff line change
Expand Up @@ -9,7 +9,8 @@
# @private
GEM_LOADERS = {
anthropic: [ "anthropic", "~> 1.12", "anthropic" ],
openai: [ "openai", "~> 0.34", "openai" ]
openai: [ "openai", "~> 0.34", "openai" ],
ruby_llm: [ "ruby_llm", ">= 1.0", "ruby_llm" ]
}

# Requires a provider's gem dependency.
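Each `GEM_LOADERS` entry is a `[gem name, version constraint, require path]` triple. A hypothetical consumer might look like the following; the real loader lives below the hunk shown, so the method name and the `gem`/`require` usage here are assumptions:

```ruby
# Entry copied from the diff above.
GEM_LOADERS = {
  ruby_llm: [ "ruby_llm", ">= 1.0", "ruby_llm" ]
}

# Hypothetical loader: activate the gem at the pinned constraint,
# then require its entry point.
def load_provider_gem(key)
  gem_name, constraint, require_path = GEM_LOADERS.fetch(key)
  gem gem_name, constraint
  require require_path
end
```

Keeping the constraint in one table means the gemspec's development dependency (`">= 1.0"` for `ruby_llm`) and the runtime loader stay in sync.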