diff --git a/AGENTS.md b/AGENTS.md
index abb0f211..589ac2ae 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -224,6 +224,12 @@ bin/test test/integration/open_ai/
 - Access 200+ models through single API
 - Provider preferences via `provider: { order: [...] }`
 
+### RubyLLM
+- Uses `ruby_llm` gem for unified access to 15+ providers
+- RubyLLM manages its own API keys via `RubyLLM.configure`
+- Model ID determines which provider is used automatically
+- Supports prompts, embeddings, tool calling, and streaming
+
 ## Common Gotchas
 
 1. **Generation is lazy** - Nothing happens until `generate_now` or `prompt_later`
@@ -253,7 +259,7 @@ bin/rubocop
 - Ruby 3.1+
 - Rails 7.2+ / 8.0+ / 8.1+
 
-- Provider gems (optional): `openai`, `anthropic`
+- Provider gems (optional): `openai`, `anthropic`, `ruby_llm`
 
 ## Links
diff --git a/README.md b/README.md
index 3b73004a..78bafbc9 100644
--- a/README.md
+++ b/README.md
@@ -39,6 +39,9 @@ bundle add openai
 
 # OpenRouter (uses OpenAI-compatible API)
 bundle add openai
+
+# RubyLLM (unified API for 15+ providers)
+bundle add ruby_llm
 ```
 
 ### Setup
@@ -119,12 +122,15 @@ development:
   ollama:
     service: "Ollama"
     model: "llama3.2"
+
+  ruby_llm:
+    service: "RubyLLM"
 ```
 
 ## Features
 
 - **Agent-Oriented Programming**: Build AI applications using familiar Rails patterns
-- **Multiple Provider Support**: Works with OpenAI, Anthropic, Ollama, and more
+- **Multiple Provider Support**: Works with OpenAI, Anthropic, Ollama, RubyLLM, and more
 - **Action-Based Design**: Define agent capabilities through actions
 - **View Templates**: Use ERB templates for prompts (text, JSON, HTML)
 - **Streaming Support**: Real-time response streaming with ActionCable
diff --git a/Rakefile b/Rakefile
index f9b23851..58009e57 100644
--- a/Rakefile
+++ b/Rakefile
@@ -4,7 +4,9 @@ require "rake/testtask"
 
 Rake::TestTask.new(:test) do |t|
   t.libs << "test"
-  t.pattern = "test/**/*_test.rb"
+  t.test_files = FileList["test/**/*_test.rb"]
+    .exclude("test/**/integration_test.rb")
+    .exclude("test/dummy/tmp/**/*")
   t.verbose = true
 end
diff --git a/activeagent.gemspec b/activeagent.gemspec
index 7bb4fbbf..0abd70f6 100644
--- a/activeagent.gemspec
+++ b/activeagent.gemspec
@@ -29,6 +29,7 @@ Gem::Specification.new do |spec|
   spec.add_development_dependency "anthropic", "~> 1.12"
   spec.add_development_dependency "openai", "~> 0.34"
+  spec.add_development_dependency "ruby_llm", ">= 1.0"
 
   spec.add_development_dependency "capybara", "~> 3.40"
   spec.add_development_dependency "cuprite", "~> 0.15"
diff --git a/docs/actions/embeddings.md b/docs/actions/embeddings.md
index 84ad0d99..2cd1b3f6 100644
--- a/docs/actions/embeddings.md
+++ b/docs/actions/embeddings.md
@@ -84,3 +84,4 @@ Different models produce different embedding dimensions:
 - [Testing](/framework/testing) - Test embedding functionality
 - [OpenAI Provider](/providers/open_ai) - OpenAI embedding models
 - [Ollama Provider](/providers/ollama) - Local embedding generation
+- [RubyLLM Provider](/providers/ruby_llm) - Embeddings via RubyLLM's unified API
diff --git a/docs/actions/mcps.md b/docs/actions/mcps.md
index 0ccff44e..71acbdc9 100644
--- a/docs/actions/mcps.md
+++ b/docs/actions/mcps.md
@@ -18,6 +18,7 @@ Connect agents to external services via [Model Context Protocol](https://modelco
 | **Anthropic** | ⚠️ | Beta |
 | **OpenRouter** | 🚧 | In development |
 | **Ollama** | ❌ | Not supported |
+| **RubyLLM** | ❌ | Not supported (use provider-specific integration) |
 | **Mock** | ❌ | Not supported |
 
 ## MCP Format
diff --git a/docs/actions/structured_output.md b/docs/actions/structured_output.md
index b9b8f7a1..eaf6de92 100644
--- a/docs/actions/structured_output.md
+++ b/docs/actions/structured_output.md
@@ -23,6 +23,7 @@ Two JSON response formats:
 | **Anthropic** | 🟦 | ❌ | Emulated via prompt engineering technique |
 | **OpenRouter** | 🟩 | 🟩 | Native support, depends on underlying model |
 | **Ollama** | 🟨 | 🟨 | Model-dependent, support varies by model |
+| **RubyLLM** | 🟨 | 🟨 | Depends on underlying provider/model |
 | **Mock** | 🟩 | 🟩 | Accepted but not validated or enforced |
 
 ## JSON Object Mode
diff --git a/docs/actions/tools.md b/docs/actions/tools.md
index 61f51435..43c8281e 100644
--- a/docs/actions/tools.md
+++ b/docs/actions/tools.md
@@ -23,6 +23,7 @@ The LLM calls `get_weather` automatically when it needs weather data, and uses t
 | **Anthropic** | 🟩 | 🟩 | Full support for built-in tools |
 | **OpenRouter** | 🟩 | ❌ | Model-dependent capabilities |
 | **Ollama** | 🟩 | ❌ | Model-dependent capabilities |
+| **RubyLLM** | 🟩 | ❌ | Depends on underlying provider/model |
 | **Mock** | 🟦 | ❌ | Accepted but not enforced |
 
 For **MCP (Model Context Protocol)** support, see the [MCP documentation](/actions/mcps).
diff --git a/docs/framework.md b/docs/framework.md
index 465472ca..bc559d16 100644
--- a/docs/framework.md
+++ b/docs/framework.md
@@ -113,7 +113,7 @@ Actions call `prompt()` or `embed()` to configure requests. Callbacks manage con
 
 ActiveAgent integrates with Rails features and AI capabilities:
 
-- **[Providers](/providers)** - Swap AI services (OpenAI, Anthropic, Ollama, OpenRouter)
+- **[Providers](/providers)** - Swap AI services (OpenAI, Anthropic, Ollama, OpenRouter, RubyLLM)
 - **[Instructions](/agents/instructions)** - System prompts from templates or strings
 - **[Callbacks](/agents/callbacks)** - Lifecycle hooks for context and logging
 - **[Tools](/actions/tools)** - Agent methods as AI-callable functions
@@ -133,7 +133,7 @@ ActiveAgent integrates with Rails features and AI capabilities:
 - [Generation](/agents/generation) - Synchronous and asynchronous execution
 - [Instructions](/agents/instructions) - System prompts and behavior guidance
 - [Messages](/actions/messages) - Conversation context with multimodal support
-- [Providers](/providers) - OpenAI, Anthropic, Ollama, OpenRouter configuration
+- [Providers](/providers) - OpenAI, Anthropic, Ollama, OpenRouter, RubyLLM configuration
 
 **Advanced:**
 - [Tools](/actions/tools) - AI-callable Ruby methods and MCP integration
diff --git a/docs/framework/configuration.md b/docs/framework/configuration.md
index 9e0fb765..fb3ce83b 100644
--- a/docs/framework/configuration.md
+++ b/docs/framework/configuration.md
@@ -206,7 +206,7 @@ Common settings available across all providers:
 
 | Setting | Type | Required | Description |
 |---------|------|----------|-------------|
-| `service` | String | Yes | Provider class name (OpenAI, Anthropic, OpenRouter, Ollama, Mock) |
+| `service` | String | Yes | Provider class name (OpenAI, Anthropic, OpenRouter, Ollama, RubyLLM, Mock) |
 | `access_token` / `api_key` | String | Yes* | API authentication key |
 | `model` | String | Yes* | Model identifier for the LLM to use |
 | `temperature` | Float | No | Randomness control (0.0-2.0, default varies by provider) |
@@ -220,6 +220,7 @@ Common settings available across all providers:
 - **[Ollama Provider](/providers/ollama)** - Host configuration for local instances
 - **[OpenAI Provider](/providers/open_ai)** - Organization ID, request timeout, admin token, etc.
 - **[OpenRouter Provider](/providers/open_router)** - App name, site URL, provider preferences, etc.
+- **[RubyLLM Provider](/providers/ruby_llm)** - Unified API for 15+ providers via RubyLLM
 - **[Mock Provider](/providers/mock)** - Testing-specific options
 
 ### Using Configured Providers
diff --git a/docs/getting_started.md b/docs/getting_started.md
index e5614cf5..9c3d5ef1 100644
--- a/docs/getting_started.md
+++ b/docs/getting_started.md
@@ -10,7 +10,7 @@ Build AI agents with Rails in minutes. This guide covers installation, configura
 
 - Ruby 3.0+
 - Rails 7.0+
-- API key for your chosen provider (OpenAI, Anthropic, or Ollama)
+- API key for your chosen provider (OpenAI, Anthropic, Ollama, or RubyLLM)
 
 ## Installation
@@ -40,6 +40,10 @@ bundle add openai # Ollama uses OpenAI-compatible API
 bundle add openai # OpenRouter uses OpenAI-compatible API
 ```
 
+```bash [RubyLLM]
+bundle add ruby_llm # Unified API for 15+ providers
+```
+
 :::
 
 Run the install generator:
@@ -202,7 +206,7 @@ See **[Generation](/agents/generation)** for background jobs, callbacks, and res
 
 - **[Tools](/actions/tools)** - Function calling and MCP integration
 - **[Structured Output](/actions/structured_output)** - Parse JSON with schemas
 - **[Embeddings](/actions/embeddings)** - Vector generation for semantic search
-- **[Providers](/providers)** - OpenAI, Anthropic, Ollama, OpenRouter
+- **[Providers](/providers)** - OpenAI, Anthropic, Ollama, OpenRouter, RubyLLM
 
 **Framework:**
 - **[Configuration](/framework/configuration)** - Environment settings, precedence
diff --git a/docs/providers.md b/docs/providers.md
index 9cfe45b7..add6f66a 100644
--- a/docs/providers.md
+++ b/docs/providers.md
@@ -20,6 +20,15 @@ Providers connect your agents to AI services through a unified interface. Switch
 
 <<< @/../test/dummy/app/agents/providers/mock_agent.rb#agent{ruby} [Mock]
 
+```ruby [RubyLLM]
+class RubyLLMAgent < ApplicationAgent
+  generate_with :ruby_llm, model: "gpt-4o-mini"
+  # Works with any model RubyLLM supports:
+  # generate_with :ruby_llm, model: "claude-sonnet-4-5-20250929"
+  # generate_with :ruby_llm, model: "gemini-2.0-flash"
+end
+```
+
 :::
 
 ## Choosing a Provider
@@ -52,6 +61,13 @@ Access 200+ models from OpenAI, Anthropic, Google, Meta, and more through one AP
 
 **Choose when:** You want to compare models, need fallback options, or want flexible provider switching. Good for reducing vendor lock-in.
 
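Both OpenRouter and the RubyLLM provider added in this patch pick an upstream service from the model identifier alone. A toy plain-Ruby sketch of that routing idea (the prefix table and method name are illustrative assumptions, not either library's actual resolution logic):

```ruby
# Toy sketch of model-ID-based provider routing, as the docs describe.
# The prefixes and provider symbols here are illustrative only.
MODEL_PREFIXES = {
  "gpt-"    => :openai,
  "claude-" => :anthropic,
  "gemini-" => :gemini
}.freeze

def route_provider(model_id)
  MODEL_PREFIXES.each do |prefix, provider|
    return provider if model_id.start_with?(prefix)
  end
  :unknown
end

route_provider("gpt-4o-mini")  # => :openai
```

The real resolution (`RubyLLM::Models.resolve` in the provider code below) also returns the resolved model object and consults RubyLLM's configured registry, but the dispatch-by-identifier idea is the same.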
+### [RubyLLM](/providers/ruby_llm)
+**Best for:** Multi-provider flexibility through a single unified gem
+
+Access 15+ LLM providers (OpenAI, Anthropic, Gemini, Bedrock, Azure, Ollama, and more) through RubyLLM's unified API. Switch models by changing a single parameter.
+
+**Choose when:** You want a single gem to access multiple providers, prefer RubyLLM's configuration model, or want to switch between providers without changing provider configuration.
+
 ### [Mock](/providers/mock)
 **Best for:** Testing, development, offline work
diff --git a/docs/providers/ruby_llm.md b/docs/providers/ruby_llm.md
new file mode 100644
index 00000000..ff3f7c16
--- /dev/null
+++ b/docs/providers/ruby_llm.md
@@ -0,0 +1,178 @@
+---
+title: RubyLLM Provider
+description: Unified access to 15+ LLM providers through the RubyLLM gem. Use OpenAI, Anthropic, Gemini, Bedrock, Azure, Ollama, and more with a single provider configuration.
+---
+# {{ $frontmatter.title }}
+
+The RubyLLM provider gives your agents access to 15+ LLM providers through [RubyLLM](https://rubyllm.com)'s unified API. Switch between OpenAI, Anthropic, Gemini, Bedrock, Azure, Ollama, and more by changing the model parameter.
+
+## Configuration
+
+### Basic Setup
+
+Configure RubyLLM in your agent:
+
+```ruby
+class MyAgent < ApplicationAgent
+  generate_with :ruby_llm, model: "gpt-4o-mini"
+end
+```
+
+### RubyLLM API Keys
+
+RubyLLM manages its own API keys. Configure them in an initializer:
+
+```ruby
+# config/initializers/ruby_llm.rb
+RubyLLM.configure do |config|
+  config.openai_api_key = Rails.application.credentials.dig(:openai, :api_key)
+  config.anthropic_api_key = Rails.application.credentials.dig(:anthropic, :api_key)
+  config.gemini_api_key = Rails.application.credentials.dig(:gemini, :api_key)
+  # Add keys for any providers you want to use
+end
+```
+
+### Configuration File
+
+Set up RubyLLM in `config/active_agent.yml`:
+
+```yaml
+ruby_llm: &ruby_llm
+  service: "RubyLLM"
+
+development:
+  ruby_llm:
+    <<: *ruby_llm
+
+production:
+  ruby_llm:
+    <<: *ruby_llm
+```
+
+## Supported Models
+
+RubyLLM automatically resolves which provider to use based on the model ID. Any model supported by RubyLLM works with this provider. For the complete list, see [RubyLLM's documentation](https://rubyllm.com).
+
+### Examples by Provider
+
+| Provider | Example Models |
+|----------|---------------|
+| **OpenAI** | `gpt-4o`, `gpt-4o-mini`, `gpt-4.1` |
+| **Anthropic** | `claude-sonnet-4-5-20250929`, `claude-haiku-4-5` |
+| **Google Gemini** | `gemini-2.0-flash`, `gemini-1.5-pro` |
+| **AWS Bedrock** | Bedrock-hosted models |
+| **Azure OpenAI** | Azure-hosted OpenAI models |
+| **Ollama** | `llama3`, `mistral`, locally-hosted models |
+
+Switch providers by changing the model:
+
+```ruby
+class FlexibleAgent < ApplicationAgent
+  # Any of these work with the same provider config:
+  generate_with :ruby_llm, model: "gpt-4o-mini"
+  # generate_with :ruby_llm, model: "claude-sonnet-4-5-20250929"
+  # generate_with :ruby_llm, model: "gemini-2.0-flash"
+end
+```
+
+## Provider-Specific Parameters
+
+### Required Parameters
+
+- **`model`** - Model identifier (e.g., "gpt-4o-mini", "claude-sonnet-4-5-20250929")
+
+### Sampling Parameters
+
+- **`temperature`** - Controls randomness (0.0 to 1.0)
+- **`max_tokens`** - Maximum number of tokens to generate (passed via RubyLLM's `params:` merge)
+
+### Client Configuration
+
+Configure timeouts and other settings through RubyLLM directly:
+
+```ruby
+RubyLLM.configure do |config|
+  config.request_timeout = 120
+end
+```
+
+## Tool Calling
+
+RubyLLM supports tool/function calling for models that support it. Use the standard ActiveAgent tool format:
+
+```ruby
+class WeatherAgent < ApplicationAgent
+  generate_with :ruby_llm, model: "gpt-4o-mini"
+
+  def forecast
+    prompt(
+      message: "What's the weather in Boston?",
+      tools: [{
+        name: "get_weather",
+        description: "Get weather for a location",
+        parameters: {
+          type: "object",
+          properties: {
+            location: { type: "string", description: "City name" }
+          },
+          required: ["location"]
+        }
+      }]
+    )
+  end
+
+  def get_weather(location:)
+    WeatherService.fetch(location)
+  end
+end
+```
+
+## Embeddings
+
+Generate embeddings through RubyLLM's unified embedding API:
+
+```ruby
+class SearchAgent < ApplicationAgent
+  generate_with :ruby_llm, model: "gpt-4o-mini"
+  embed_with :ruby_llm, model: "text-embedding-3-small"
+
+  def index_document
+    embed(input: "Document text to embed")
+  end
+end
+```
+
+## Streaming
+
+Streaming is available for models that support it:
+
+```ruby
+class StreamingAgent < ApplicationAgent
+  generate_with :ruby_llm, model: "gpt-4o-mini", stream: true
+end
+```
+
+See [Streaming](/agents/streaming) for ActionCable integration and real-time updates.
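The embedding action above returns numeric vectors; the usual next step is similarity ranking. A minimal plain-Ruby sketch of cosine similarity (the vectors below are made up for illustration; real ones come from the embedding response):

```ruby
# Cosine similarity: dot(a, b) / (|a| * |b|). Returns 1.0 for identical
# directions and 0.0 for orthogonal vectors. The tiny vectors here are
# made-up stand-ins for real embedding output.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  magnitude = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (magnitude.call(a) * magnitude.call(b))
end

doc_vector   = [0.1, 0.9, 0.0]
query_vector = [0.2, 0.8, 0.1]
similarity = cosine_similarity(doc_vector, query_vector)
```

In practice you would store each document's vector (e.g., with pgvector) and rank candidates by this score against the query's vector.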
+
+## When to Use RubyLLM vs Direct Providers
+
+**Use RubyLLM when:**
+- You want to switch between providers without changing configuration
+- You prefer RubyLLM's key management via `RubyLLM.configure`
+- You want access to providers that ActiveAgent doesn't have a dedicated implementation for (e.g., Gemini, Bedrock)
+- You want a single gem dependency for multi-provider support
+
+**Use a direct provider (OpenAI, Anthropic) when:**
+- You need provider-specific features (MCP servers, extended thinking, JSON schema mode)
+- You want the tightest integration with a provider's gem SDK
+- You need provider-specific error handling classes
+
+## Related Documentation
+
+- [Providers Overview](/providers) - Compare all available providers
+- [Getting Started](/getting_started) - Complete setup guide
+- [Configuration](/framework/configuration) - Environment-specific settings
+- [Tools](/actions/tools) - Function calling
+- [Embeddings](/actions/embeddings) - Vector generation
+- [Streaming](/agents/streaming) - Real-time response updates
+- [RubyLLM Documentation](https://rubyllm.com) - Official RubyLLM docs
diff --git a/lib/active_agent/concerns/provider.rb b/lib/active_agent/concerns/provider.rb
index 1ef7abd9..df946344 100644
--- a/lib/active_agent/concerns/provider.rb
+++ b/lib/active_agent/concerns/provider.rb
@@ -12,7 +12,9 @@ module Provider
       "Openrouter" => "OpenRouter",
       "Openai" => "OpenAI",
       "AzureOpenai" => "AzureOpenAI",
-      "Azureopenai" => "AzureOpenAI"
+      "Azureopenai" => "AzureOpenAI",
+      "Rubyllm" => "RubyLLM",
+      "RubyLlm" => "RubyLLM"
     }
 
   included do
diff --git a/lib/active_agent/providers/_base_provider.rb b/lib/active_agent/providers/_base_provider.rb
index 63931692..f2323fbd 100644
--- a/lib/active_agent/providers/_base_provider.rb
+++ b/lib/active_agent/providers/_base_provider.rb
@@ -9,7 +9,8 @@
 # @private
 GEM_LOADERS = {
   anthropic: [ "anthropic", "~> 1.12", "anthropic" ],
-  openai: [ "openai", "~> 0.34", "openai" ]
+  openai: [ "openai", "~> 0.34", "openai" ],
+  ruby_llm: [ "ruby_llm", ">= 1.0", "ruby_llm" ]
 }
 
 # Requires a provider's gem dependency.
diff --git a/lib/active_agent/providers/ruby_llm/_types.rb b/lib/active_agent/providers/ruby_llm/_types.rb
new file mode 100644
index 00000000..61b25a86
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/_types.rb
@@ -0,0 +1,77 @@
+# frozen_string_literal: true
+
+require_relative "options"
+require_relative "request"
+require_relative "embedding_request"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      # Type for Request model
+      class RequestType < ActiveModel::Type::Value
+        def cast(value)
+          case value
+          when Request
+            value
+          when Hash
+            Request.new(**value.deep_symbolize_keys)
+          when nil
+            nil
+          else
+            raise ArgumentError, "Cannot cast #{value.class} to Request"
+          end
+        end
+
+        def serialize(value)
+          case value
+          when Request
+            value.serialize
+          when Hash
+            value
+          when nil
+            nil
+          else
+            raise ArgumentError, "Cannot serialize #{value.class}"
+          end
+        end
+
+        def deserialize(value)
+          cast(value)
+        end
+      end
+
+      # Type for embedding requests
+      class EmbeddingRequestType < ActiveModel::Type::Value
+        def cast(value)
+          case value
+          when RubyLLM::EmbeddingRequest
+            value
+          when Hash
+            RubyLLM::EmbeddingRequest.new(**value.deep_symbolize_keys)
+          when nil
+            nil
+          else
+            raise ArgumentError, "Cannot cast #{value.class} to EmbeddingRequest"
+          end
+        end
+
+        def serialize(value)
+          case value
+          when RubyLLM::EmbeddingRequest
+            value.serialize
+          when Hash
+            value
+          when nil
+            nil
+          else
+            raise ArgumentError, "Cannot serialize #{value.class}"
+          end
+        end
+
+        def deserialize(value)
+          cast(value)
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/embedding_request.rb b/lib/active_agent/providers/ruby_llm/embedding_request.rb
new file mode 100644
index 00000000..4b69c48a
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/embedding_request.rb
@@ -0,0 +1,16 @@
+# frozen_string_literal: true
+
+require "active_agent/providers/common/model"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      # Embedding request model for RubyLLM provider.
+      class EmbeddingRequest < Common::BaseModel
+        attribute :model, :string
+        attribute :input
+        attribute :dimensions, :integer
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/messages/_types.rb b/lib/active_agent/providers/ruby_llm/messages/_types.rb
new file mode 100644
index 00000000..21043a80
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/messages/_types.rb
@@ -0,0 +1,109 @@
+# frozen_string_literal: true
+
+# Load all message classes
+require_relative "base"
+require_relative "system"
+require_relative "user"
+require_relative "assistant"
+require_relative "tool"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      module Messages
+        # Type for Messages array
+        class MessagesType < ActiveModel::Type::Value
+          def initialize
+            super
+            @message_type = MessageType.new
+          end
+
+          def cast(value)
+            case value
+            when Array
+              value.map { |v| @message_type.cast(v) }
+            when nil
+              nil
+            else
+              raise ArgumentError, "Cannot cast #{value.class} to Messages array"
+            end
+          end
+
+          def serialize(value)
+            case value
+            when Array
+              grouped = []
+
+              value.each do |message|
+                if grouped.empty? || grouped.last.role != message.role
+                  grouped << message.deep_dup
+                else
+                  grouped.last.content += message.content.deep_dup
+                end
+              end
+
+              grouped.map { |v| @message_type.serialize(v) }
+            when nil
+              nil
+            else
+              raise ArgumentError, "Cannot serialize #{value.class}"
+            end
+          end
+
+          def deserialize(value)
+            cast(value)
+          end
+        end
+
+        # Type for individual Message
+        class MessageType < ActiveModel::Type::Value
+          def cast(value)
+            case value
+            when Base
+              value
+            when String
+              User.new(content: value)
+            when Hash
+              hash = value.deep_symbolize_keys
+              role = hash[:role]&.to_sym
+
+              case role
+              when :user, nil
+                User.new(**hash)
+              when :assistant
+                Assistant.new(**hash)
+              when :system
+                System.new(**hash)
+              when :tool
+                Tool.new(**hash)
+              else
+                raise ArgumentError, "Unknown message role: #{role}"
+              end
+            when nil
+              nil
+            else
+              raise ArgumentError, "Cannot cast #{value.class} to Message"
+            end
+          end
+
+          def serialize(value)
+            case value
+            when Base
+              value.serialize
+            when Hash
+              value
+            when nil
+              nil
+            else
+              raise ArgumentError, "Cannot serialize #{value.class}"
+            end
+          end
+
+          def deserialize(value)
+            cast(value)
+          end
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/messages/assistant.rb b/lib/active_agent/providers/ruby_llm/messages/assistant.rb
new file mode 100644
index 00000000..07d5296b
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/messages/assistant.rb
@@ -0,0 +1,27 @@
+# frozen_string_literal: true
+
+require_relative "base"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      module Messages
+        # Assistant message for RubyLLM provider.
+        #
+        # Drops extra fields that are part of the API response but not
+        # part of the message structure.
+        class Assistant < Base
+          attribute :role, :string, as: "assistant"
+          attribute :content
+          attribute :tool_calls
+
+          validates :content, presence: true, unless: :tool_calls
+
+          # Drop API response fields that aren't part of the message
+          drop_attributes :usage, :id, :model, :stop_reason, :stop_sequence, :type,
+                          :input_tokens, :output_tokens
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/messages/base.rb b/lib/active_agent/providers/ruby_llm/messages/base.rb
new file mode 100644
index 00000000..908e658f
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/messages/base.rb
@@ -0,0 +1,48 @@
+# frozen_string_literal: true
+
+require "active_agent/providers/common/model"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      module Messages
+        # Base class for RubyLLM messages.
+        class Base < Common::BaseModel
+          attribute :role, :string
+          attribute :content
+
+          validates :role, presence: true
+
+          # Converts to common format.
+          #
+          # @return [Hash] message in canonical format with role and text content
+          def to_common
+            {
+              role: role,
+              content: extract_text_content,
+              name: nil
+            }
+          end
+
+          private
+
+          # Extracts text content from the content structure.
+          #
+          # @return [String] extracted text content
+          def extract_text_content
+            case content
+            when String
+              content
+            when Array
+              content.select { |block| block.is_a?(Hash) && block[:type] == "text" }
+                     .map { |block| block[:text] }
+                     .join("\n")
+            else
+              content.to_s
+            end
+          end
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/messages/system.rb b/lib/active_agent/providers/ruby_llm/messages/system.rb
new file mode 100644
index 00000000..4038a3fe
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/messages/system.rb
@@ -0,0 +1,18 @@
+# frozen_string_literal: true
+
+require_relative "base"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      module Messages
+        # System message for RubyLLM provider.
+        class System < Base
+          attribute :role, :string, as: "system"
+
+          validates :content, presence: true
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/messages/tool.rb b/lib/active_agent/providers/ruby_llm/messages/tool.rb
new file mode 100644
index 00000000..f53b913e
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/messages/tool.rb
@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+require_relative "base"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      module Messages
+        # Tool result message for RubyLLM provider.
+        class Tool < Base
+          attribute :role, :string, as: "tool"
+          attribute :content
+          attribute :tool_call_id, :string
+
+          def to_common
+            common = super
+            common[:tool_call_id] = tool_call_id if tool_call_id
+            common
+          end
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/messages/user.rb b/lib/active_agent/providers/ruby_llm/messages/user.rb
new file mode 100644
index 00000000..adec3d9d
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/messages/user.rb
@@ -0,0 +1,18 @@
+# frozen_string_literal: true
+
+require_relative "base"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      module Messages
+        # User message for RubyLLM provider.
+        class User < Base
+          attribute :role, :string, as: "user"
+
+          validates :content, presence: true
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/options.rb b/lib/active_agent/providers/ruby_llm/options.rb
new file mode 100644
index 00000000..7ca1ca95
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/options.rb
@@ -0,0 +1,28 @@
+# frozen_string_literal: true
+
+require "active_agent/providers/common/model"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      # Configuration options for the RubyLLM provider.
+      #
+      # RubyLLM manages its own API keys via RubyLLM.configure, so no
+      # provider-specific API key attributes are needed here.
+      class Options < Common::BaseModel
+        attribute :model, :string
+        attribute :temperature, :float
+        attribute :max_tokens, :integer
+
+        def initialize(kwargs = {})
+          kwargs = kwargs.deep_symbolize_keys if kwargs.respond_to?(:deep_symbolize_keys)
+          super(**deep_compact(kwargs))
+        end
+
+        def extra_headers
+          {}
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/request.rb b/lib/active_agent/providers/ruby_llm/request.rb
new file mode 100644
index 00000000..2e455580
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/request.rb
@@ -0,0 +1,30 @@
+# frozen_string_literal: true
+
+require "active_agent/providers/common/model"
+
+require_relative "messages/_types"
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      # Request model for RubyLLM provider.
+      class Request < Common::BaseModel
+        attribute :model, :string
+        attribute :messages, Messages::MessagesType.new
+        attribute :instructions
+        attribute :tools
+        attribute :tool_choice
+        attribute :temperature, :float
+        attribute :max_tokens, :integer
+        attribute :stream, :boolean, default: false
+        attribute :response_format
+
+        # Common Format Compatibility
+        def message=(value)
+          self.messages ||= []
+          self.messages << value
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm/tool_proxy.rb b/lib/active_agent/providers/ruby_llm/tool_proxy.rb
new file mode 100644
index 00000000..90e51ed8
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm/tool_proxy.rb
@@ -0,0 +1,45 @@
+# frozen_string_literal: true
+
+module ActiveAgent
+  module Providers
+    module RubyLLM
+      # Bridges ActiveAgent tool definitions to RubyLLM's expected tool interface.
+      # RubyLLM expects tools as { "name" => tool } where each tool responds to
+      # #name, #description, #parameters, #params_schema, and #provider_params.
+      class ToolProxy
+        attr_reader :name, :description, :parameters
+
+        def initialize(name:, description:, parameters:)
+          @name = name
+          @description = description
+          @parameters = parameters
+        end
+
+        # RubyLLM checks this first; returns the JSON Schema directly so
+        # RubyLLM doesn't try to interpret our parameters as Parameter objects.
+        # Deep-stringifies keys to match RubyLLM's internal schema format.
+        def params_schema
+          deep_stringify(@parameters) if @parameters.is_a?(Hash) && @parameters.any?
+        end
+
+        # RubyLLM merges this into the tool definition
+        def provider_params
+          {}
+        end
+
+        private
+
+        def deep_stringify(obj)
+          case obj
+          when Hash
+            obj.each_with_object({}) { |(k, v), h| h[k.to_s] = deep_stringify(v) }
+          when Array
+            obj.map { |v| deep_stringify(v) }
+          else
+            obj
+          end
+        end
+      end
+    end
+  end
+end
diff --git a/lib/active_agent/providers/ruby_llm_provider.rb b/lib/active_agent/providers/ruby_llm_provider.rb
new file mode 100644
index 00000000..7e647aa2
--- /dev/null
+++ b/lib/active_agent/providers/ruby_llm_provider.rb
@@ -0,0 +1,407 @@
+# frozen_string_literal: true
+
+require_relative "_base_provider"
+
+require_gem!(:ruby_llm, __FILE__) unless defined?(::RubyLLM)
+
+require_relative "ruby_llm/_types"
+require_relative "ruby_llm/tool_proxy"
+
+module ActiveAgent
+  module Providers
+    # Provider for RubyLLM's unified API, supporting 15+ LLM providers
+    # (OpenAI, Anthropic, Gemini, Bedrock, Azure, Ollama, etc.).
+    #
+    # Uses RubyLLM's provider-level API (provider.complete()) rather than
+    # the high-level Chat object to avoid conflicts with ActiveAgent's own
+    # conversation management and tool execution loop.
+    #
+    # @see BaseProvider
+    class RubyLLMProvider < BaseProvider
+      # @return [RubyLLM::EmbeddingRequestType] embedding request type
+      def self.embed_request_type
+        RubyLLM::EmbeddingRequestType.new
+      end
+
+      protected
+
+      # Clears tool_choice between turns to prevent infinite tool-calling loops.
+      def prepare_prompt_request
+        prepare_prompt_request_tools
+        super
+      end
+
+      # Executes a prompt request via RubyLLM's provider-level API.
+      #
+      # Resolves the appropriate provider from the model ID, converts
+      # ActiveAgent messages/tools to RubyLLM format, and calls
+      # provider.complete().
+      #
+      # @param parameters [Hash] serialized request parameters
+      # @return [Hash, nil] normalized API response hash, or nil for streaming
+      def api_prompt_execute(parameters)
+        @resolved_model_id = parameters[:model] || options.model
+        resolve_ruby_llm_provider!(@resolved_model_id)
+
+        # Convert messages to RubyLLM format
+        messages = build_ruby_llm_messages(parameters)
+
+        # Convert tools to RubyLLM format
+        tools = build_ruby_llm_tools(parameters[:tools])
+
+        # Build kwargs for provider.complete (tools, temperature, model are required)
+        kwargs = {
+          model: @ruby_llm_model,
+          tools: tools || {},
+          temperature: parameters[:temperature]
+        }
+        kwargs[:schema] = parameters[:response_format] if parameters[:response_format]
+
+        # Pass extra params (max_tokens, etc.) via RubyLLM's params: deep-merge
+        max_tokens = parameters[:max_tokens] || options.max_tokens
+        if max_tokens
+          kwargs[:params] = { max_tokens: max_tokens }
+        end
+
+        if parameters[:stream]
+          stream_proc = parameters[:stream]
+
+          # For streaming, pass a block that forwards chunks
+          @ruby_llm_provider.complete(messages, **kwargs) do |chunk|
+            stream_proc.call(chunk)
+          end
+
+          nil
+        else
+          response = @ruby_llm_provider.complete(messages, **kwargs)
+          normalize_ruby_llm_response(response, @resolved_model_id)
+        end
+      end
+
+      # Executes an embedding request via RubyLLM.
+      #
+      # @param parameters [Hash] serialized embedding request parameters
+      # @return [Hash] normalized embedding response with symbol keys
+      def api_embed_execute(parameters)
+        model_id = parameters[:model] || options.model
+        resolve_ruby_llm_provider!(model_id)
+
+        input = parameters[:input]
+        inputs = input.is_a?(Array) ? input : [ input ]
+
+        data = inputs.map.with_index do |text, index|
+          embedding = @ruby_llm_provider.embed(text, model: model_id, dimensions: parameters[:dimensions])
+
+          {
+            object: "embedding",
+            index: index,
+            embedding: embedding.vectors
+          }
+        end
+
+        {
+          object: :list,
+          data: data,
+          model: model_id
+        }
+      end
+
+      # Processes streaming chunks from RubyLLM.
+      #
+      # Handles RubyLLM::Chunk objects, building up the message in message_stack.
+      #
+      # @param chunk [RubyLLM::Chunk] streaming chunk
+      # @return [void]
+      def process_stream_chunk(chunk)
+        instrument("stream_chunk.active_agent")
+
+        broadcast_stream_open
+
+        if message_stack.empty? || !message_stack.last.is_a?(Hash) || message_stack.last[:role] != "assistant"
+          message_stack.push({ role: "assistant", content: "" })
+        end
+
+        message = message_stack.last
+
+        # Append content delta
+        if chunk.content
+          message[:content] ||= ""
+          message[:content] += chunk.content
+          broadcast_stream_update(message, chunk.content)
+        end
+
+        # Handle tool calls in chunk
+        if chunk.tool_calls&.any?
+          message[:tool_calls] ||= []
+          chunk.tool_calls.each do |_id, tool_call|
+            existing = message[:tool_calls].find { |tc| tc[:id] == tool_call.id }
+            if existing
+              existing[:function][:arguments] += tool_call.arguments.to_s if tool_call.arguments
+            else
+              message[:tool_calls] << {
+                id: tool_call.id,
+                type: "function",
+                function: {
+                  name: tool_call.name,
+                  arguments: tool_call.arguments.to_s
+                }
+              }
+            end
+          end
+        end
+
+        # Stream completion is handled by the base provider after
+        # api_prompt_execute returns nil. No action needed here.
+      end
+
+      # Extracts messages from the completed API response.
+      #
+      # @param api_response [Hash, nil] normalized response hash
+      # @return [Array, nil]
+      def process_prompt_finished_extract_messages(api_response)
+        return nil unless api_response
+        [ api_response ]
+      end
+
+      # Extracts tool/function calls from the last message in the stack.
+      #
+      # Converts RubyLLM's tool_calls format to ActiveAgent's expected format
+      # with parsed JSON arguments.
+      #
+      # @return [Array, nil] tool calls or nil
+      def process_prompt_finished_extract_function_calls
+        last_message = message_stack.last
+        return nil unless last_message.is_a?(Hash)
+
+        tool_calls = last_message[:tool_calls]
+        return nil unless tool_calls&.any?
+
+        tool_calls.map do |tc|
+          args = tc.dig(:function, :arguments)
+          parsed_args = if args.is_a?(String) && args.present?
+            JSON.parse(args, symbolize_names: true)
+          elsif args.is_a?(Hash)
+            args.deep_symbolize_keys
+          else
+            {}
+          end
+
+          {
+            id: tc[:id],
+            name: tc.dig(:function, :name),
+            input: parsed_args
+          }
+        end
+      end
+
+      # Extracts function names from tool_calls in assistant messages on the stack.
+      #
+      # @return [Array]
+      def extract_used_function_names
+        message_stack
+          .select { |msg| msg[:role] == "assistant" && msg[:tool_calls] }
+          .flat_map { |msg| msg[:tool_calls] }
+          .map { |tc| tc.dig(:function, :name) }
+          .compact
+      end
+
+      # Returns true if tool_choice forces any tool to be used.
+      #
+      # Handles both string ("required") and hash ({name: "..."}) formats.
+      #
+      # @return [Boolean]
+      def tool_choice_forces_required?
+        request.tool_choice == "required"
+      end
+
+      # Returns [true, name] if tool_choice forces a specific tool.
+      #
+      # @return [Array]
+      def tool_choice_forces_specific?
+        if request.tool_choice.is_a?(Hash)
+          [ true, request.tool_choice[:name] ]
+        else
+          [ false, nil ]
+        end
+      end
+
+      # Executes tool calls and pushes results to message_stack.
+      #
+      # @param tool_calls [Array] with :id, :name, :input keys
+      # @return [void]
+      def process_function_calls(tool_calls)
+        tool_calls.each do |tool_call|
+          content = instrument("tool_call.active_agent", tool_name: tool_call[:name]) do
+            tools_function.call(tool_call[:name], **tool_call[:input])
+          end
+
+          message_stack.push({
+            role: "tool",
+            tool_call_id: tool_call[:id],
+            content: content.to_json
+          })
+        end
+      end
+
+      # api_prompt_execute always returns a normalized Hash or nil (streaming),
+      # so no additional normalization is needed for instrumentation.
+      # Inherits default api_response_normalize from BaseProvider.
+
+      private
+
+      # Resolves and caches the RubyLLM provider for the given model.
+      #
+      # Reuses the cached provider if the model hasn't changed (e.g., during
+      # multi-turn tool calling loops).
+      #
+      # @param model_id [String] model identifier
+      # @return [void]
+      def resolve_ruby_llm_provider!(model_id)
+        return if @ruby_llm_provider && @cached_model_id == model_id
+
+        @cached_model_id = model_id
+        @ruby_llm_model, @ruby_llm_provider = ::RubyLLM::Models.resolve(model_id, config: ::RubyLLM.config)
+      end
+
+      # Converts ActiveAgent messages to RubyLLM message format.
+      #
+      # Prepends system instructions as the first message if present.
+      #
+      # @param parameters [Hash] request parameters
+      # @return [Array] RubyLLM-formatted messages
+      def build_ruby_llm_messages(parameters)
+        messages = []
+
+        # Add system instructions
+        if parameters[:instructions].present?
+          messages << ::RubyLLM::Message.new(
+            role: :system,
+            content: parameters[:instructions]
+          )
+        end
+
+        # Convert each message
+        (parameters[:messages] || []).each do |msg|
+          ruby_llm_msg = if msg[:tool_call_id]
+            ::RubyLLM::Message.new(
+              role: :tool,
+              content: msg[:content].to_s,
+              tool_call_id: msg[:tool_call_id]
+            )
+          else
+            attrs = {
+              role: msg[:role].to_sym,
+              content: extract_content_text(msg[:content])
+            }
+            attrs[:tool_calls] = convert_tool_calls_for_ruby_llm(msg[:tool_calls]) if msg[:tool_calls]
+            ::RubyLLM::Message.new(**attrs)
+          end
+
+          messages << ruby_llm_msg
+        end
+
+        messages
+      end
+
+      # Extracts plain text from various content formats.
+      #
+      # @param content [String, Array, Object] message content
+      # @return [String]
+      def extract_content_text(content)
+        case content
+        when String
+          content
+        when Array
+          content.select { |block| block.is_a?(Hash) && block[:type] == "text" }
+                 .map { |block| block[:text] }
+                 .join("\n")
+        else
+          content.to_s
+        end
+      end
+
+      # Converts ActiveAgent tool_calls to RubyLLM's ToolCall format.
+      #
+      # @param tool_calls [Array] ActiveAgent format tool calls
+      # @return [Hash] RubyLLM format { id => ToolCall }
+      def convert_tool_calls_for_ruby_llm(tool_calls)
+        return nil unless tool_calls
+
+        tool_calls.each_with_object({}) do |tc, hash|
+          id = tc[:id]
+          call = ::RubyLLM::ToolCall.new(
+            id: id,
+            name: tc.dig(:function, :name) || tc[:name],
+            arguments: tc.dig(:function, :arguments) || tc[:input]&.to_json || "{}"
+          )
+          hash[id] = call
+        end
+      end
+
+      # Converts ActiveAgent tool definitions to RubyLLM ToolProxy objects.
+      #
+      # @param tools [Array, nil] ActiveAgent tool definitions
+      # @return [Hash, nil] { "name" => ToolProxy }
+      def build_ruby_llm_tools(tools)
+        return nil unless tools&.any?
+
+        tools.each_with_object({}) do |tool, hash|
+          func = tool[:function] || tool
+          proxy = RubyLLM::ToolProxy.new(
+            name: func[:name],
+            description: func[:description] || "",
+            parameters: func[:parameters] || {}
+          )
+          hash[proxy.name] = proxy
+        end
+      end
+
+      # Converts a RubyLLM::Message response to a normalized hash.
+      #
+      # @param response [RubyLLM::Message] the response message
+      # @param model_id [String, nil] the model used
+      # @return [Hash] normalized response hash
+      def normalize_ruby_llm_response(response, model_id)
+        hash = {
+          role: "assistant",
+          content: response.content.to_s
+        }
+
+        # Handle tool calls
+        if response.tool_calls&.any?
+          hash[:tool_calls] = response.tool_calls.map do |id, tc|
+            {
+              id: id,
+              type: "function",
+              function: {
+                name: tc.name,
+                arguments: tc.arguments.is_a?(String) ? tc.arguments : tc.arguments.to_json
+              }
+            }
+          end
+        end
+
+        # Add stop_reason if available
+        if response.respond_to?(:stop_reason) && response.stop_reason
+          hash[:stop_reason] = response.stop_reason
+        elsif response.tool_calls&.any?
+          hash[:stop_reason] = "tool_use"
+        else
+          hash[:stop_reason] = "end_turn"
+        end
+
+        # Add usage info if available
+        if response.respond_to?(:input_tokens) && response.input_tokens
+          hash[:usage] = {
+            input_tokens: response.input_tokens,
+            output_tokens: response.output_tokens
+          }
+        end
+
+        hash[:model] = model_id if model_id
+
+        hash
+      end
+    end
+  end
+end
diff --git a/test/dummy/config/active_agent.yml b/test/dummy/config/active_agent.yml
index c4144c1f..7778309d 100644
--- a/test/dummy/config/active_agent.yml
+++ b/test/dummy/config/active_agent.yml
@@ -29,6 +29,10 @@ mock: &mock
   service: "Mock"
   model: "mock-model"
 # endregion mock_anchor
+# region ruby_llm_anchor
+ruby_llm: &ruby_llm
+  service: "RubyLLM"
+# endregion ruby_llm_anchor
 # endregion config_anchors
 
 # region config_development
@@ -55,6 +59,10 @@ development:
   mock:
     <<: *mock
   # endregion mock_dev_config
+  # region ruby_llm_dev_config
+  ruby_llm:
+    <<: *ruby_llm
+  # endregion ruby_llm_dev_config
 # endregion config_development
 
 # region config_test
@@ -71,4 +79,6 @@ test:
     <<: *anthropic
   mock:
     <<: *mock
+  ruby_llm:
+    <<: *ruby_llm
 # endregion config_test
diff --git a/test/providers/ruby_llm/integration_test.rb b/test/providers/ruby_llm/integration_test.rb
new file mode 100644
index 00000000..e780557e
--- /dev/null
+++ b/test/providers/ruby_llm/integration_test.rb
@@ -0,0 +1,159 @@
+# frozen_string_literal: true
+
+# Integration test for the RubyLLM provider against real OpenAI API.
+#
+# Run with:
+#   BUNDLE_GEMFILE=gemfiles/rails8.gemfile mise exec ruby@3.3.10 -- bundle exec ruby -Itest test/providers/ruby_llm/integration_test.rb
+#
+# Requires: OPENAI_API_KEY env var set.
+
+unless ENV["OPENAI_API_KEY"]
+  # When run via rake test without the key, define a no-op test class.
+  require "test_helper"
+
+  class RubyLLMIntegrationTest < ActiveSupport::TestCase
+    test "skipped - OPENAI_API_KEY not set" do
+      skip "Set OPENAI_API_KEY to run RubyLLM integration tests"
+    end
+  end
+
+  return
+end
+
+require "test_helper"
+require "ruby_llm"
+
+# Configure RubyLLM with real API key
+RubyLLM.configure do |config|
+  config.openai_api_key = ENV.fetch("OPENAI_API_KEY")
+end
+
+# Allow real HTTP connections for integration tests
+if defined?(VCR)
+  VCR.configure do |config|
+    config.allow_http_connections_when_no_cassette = true
+  end
+end
+
+if defined?(WebMock)
+  WebMock.allow_net_connect!
+end
+
+require "active_agent/providers/ruby_llm_provider"
+
+class RubyLLMIntegrationTest < ActiveSupport::TestCase
+  test "basic non-streaming prompt" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "Reply with exactly two words: hello world" } ]
+    )
+
+    response = provider.prompt
+
+    assert_not_nil response
+    assert response.messages.any?
+
+    content = response.messages.last.content
+    assert content.present?, "Expected non-empty content"
+    assert_match(/hello world/i, content)
+
+    assert_not_nil response.usage
+    assert response.usage.input_tokens.to_i > 0
+    assert response.usage.output_tokens.to_i > 0
+  end
+
+  test "streaming prompt" do
+    events = []
+
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "Reply with exactly two words: streaming works" } ],
+      stream: true,
+      stream_broadcaster: ->(message, delta, event_type) {
+        events << { type: event_type, delta: delta }
+      }
+    )
+
+    provider.prompt
+
+    assert events.any? { |e| e[:type] == :open }, "Missing open event"
+    assert events.any? { |e| e[:type] == :update }, "Missing update events"
+    assert events.any? { |e| e[:type] == :close }, "Missing close event"
+
+    deltas = events.select { |e| e[:type] == :update && e[:delta] }.map { |e| e[:delta] }
+    full_text = deltas.join
+    assert full_text.present?, "Streamed content should not be empty"
+  end
+
+  test "tool calling" do
+    tool_calls_received = []
+
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "What is the weather in Boston? Use the get_weather tool." } ],
+      tools: [
+        {
+          type: "function",
+          function: {
+            name: "get_weather",
+            description: "Get the current weather for a given location",
+            parameters: {
+              type: "object",
+              properties: {
+                location: { type: "string", description: "City name" }
+              },
+              required: [ "location" ]
+            }
+          }
+        }
+      ],
+      tools_function: ->(name, **kwargs) {
+        tool_calls_received << { name: name, kwargs: kwargs }
+        { temperature: 72, condition: "sunny", unit: "fahrenheit" }
+      }
+    )
+
+    response = provider.prompt
+
+    assert tool_calls_received.any?, "Tool should have been called"
+    assert_equal "get_weather", tool_calls_received.first[:name]
+
+    content = response.messages.last.content
+    assert content.present?, "Final response should not be empty"
+  end
+
+  test "embedding" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "text-embedding-3-small",
+      input: "The quick brown fox"
+    )
+
+    response = provider.embed
+
+    assert_not_nil response
+    assert response.data.any?
+
+    embedding = response.data.first[:embedding]
+    assert_kind_of Array, embedding
+    assert_equal 1536, embedding.size
+  end
+
+  test "system instructions" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      instructions: "You must always respond with exactly the word 'PINEAPPLE' and nothing else.",
+      messages: [ { role: "user", content: "Say something" } ]
+    )
+
+    response = provider.prompt
+    content = response.messages.last.content
+
+    assert content.present?
+    assert_match(/pineapple/i, content)
+  end
+end
diff --git a/test/providers/ruby_llm/ruby_llm_provider_test.rb b/test/providers/ruby_llm/ruby_llm_provider_test.rb
new file mode 100644
index 00000000..f5ad06a2
--- /dev/null
+++ b/test/providers/ruby_llm/ruby_llm_provider_test.rb
@@ -0,0 +1,1112 @@
+# frozen_string_literal: true
+
+require "test_helper"
+
+# Stub RubyLLM gem classes for testing without the gem installed.
+# These stubs match the real RubyLLM API surface used by the provider.
+#
+# When the real ruby_llm gem is already loaded (e.g. integration tests ran
+# first), we skip defining stubs and fall through to StubProvider only.
+unless defined?(::RubyLLM::Models)
+  module ::RubyLLM
+    def self.config
+      @config ||= Struct.new(:openai_api_key).new("test")
+    end
+
+    class Message
+      attr_accessor :role, :content, :tool_calls, :tool_call_id,
+                    :input_tokens, :output_tokens, :stop_reason
+
+      def initialize(role:, content: nil, tool_calls: nil, tool_call_id: nil, **_kwargs)
+        @role = role
+        @content = content
+        @tool_calls = tool_calls
+        @tool_call_id = tool_call_id
+        @input_tokens = nil
+        @output_tokens = nil
+        @stop_reason = nil
+      end
+
+      def tool_call?
+        tool_calls&.any?
+      end
+    end
+
+    class ToolCall
+      attr_accessor :id, :name, :arguments
+
+      def initialize(id:, name:, arguments: "{}")
+        @id = id
+        @name = name
+        @arguments = arguments
+      end
+    end
+
+    class Chunk
+      attr_accessor :content, :tool_calls, :finish_reason
+
+      def initialize(content: nil, tool_calls: nil, finish_reason: nil)
+        @content = content
+        @tool_calls = tool_calls
+        @finish_reason = finish_reason
+      end
+    end
+
+    module Model
+      class Info
+        attr_reader :id, :provider
+        def initialize(data)
+          data = { id: data } if data.is_a?(String)
+          @id = data[:id]
+          @provider = data[:provider] || "openai"
+        end
+      end
+    end
+
+    class Embedding
+      attr_accessor :vectors
+
+      def initialize(vectors:)
+        @vectors = vectors
+      end
+    end
+
+    # Use a module so it doesn't conflict with the real gem's class Models
+    module Models
+      @default_provider = nil
+
+      def self.resolve(model_id, **_kwargs)
+        [ Model::Info.new(id: model_id, provider: "openai"), @default_provider ]
+      end
+
+      def self.default_provider
+        @default_provider
+      end
+
+      def self.default_provider=(provider)
+        @default_provider = provider
+      end
+    end
+  end
+end
+
+# StubProvider lives outside the guard so it's always available for tests,
+# whether the real gem is loaded or not.
+module ::RubyLLM
+  class StubProvider
+    attr_reader :last_messages, :last_kwargs
+
+    def complete(messages, tools:, temperature:, model:, **kwargs, &block)
+      @last_messages = messages
+      @last_kwargs = { tools: tools, temperature: temperature, model: model }.merge(kwargs)
+
+      if block_given?
+        block.call(Chunk.new(content: "Hello "))
+        block.call(Chunk.new(content: "world"))
+        block.call(Chunk.new(content: nil, finish_reason: "stop"))
+        nil
+      else
+        msg = Message.new(role: :assistant, content: "Hello from RubyLLM")
+        msg.input_tokens = 10
+        msg.output_tokens = 5
+        msg
+      end
+    end
+
+    def embed(text, model:, dimensions:)
+      Embedding.new(vectors: Array.new(1536) { rand * 2 - 1 })
+    end
+  end
+
+  # Set default provider for stub Models (no-op if real gem is loaded)
+  if defined?(::RubyLLM::Models) && ::RubyLLM::Models.respond_to?(:default_provider=)
+    ::RubyLLM::Models.default_provider = StubProvider.new
+  end
+end
+
+# RubyLLM stubs are defined above, so require_gem! will be skipped
+# via the `unless defined?(::RubyLLM)` guard in the provider file.
+require "active_agent/providers/ruby_llm_provider"
+
+class RubyLLMProviderTest < ActiveSupport::TestCase
+  setup do
+    @provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [
+        { role: "user", content: "Hello world" }
+      ]
+    )
+  end
+
+  # --- Basic provider setup ---
+
+  test "service_name returns RubyLLM" do
+    assert_equal "RubyLLM", @provider.service_name
+  end
+
+  test "provider initializes with valid config" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "claude-3-5-haiku",
+      temperature: 0.7,
+      messages: [
+        { role: "user", content: "Hello" }
+      ]
+    )
+
+    assert_not_nil provider
+    assert_equal "RubyLLM", provider.service_name
+  end
+
+  test "provider initializes with max_tokens option" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      max_tokens: 1024,
+      messages: [ { role: "user", content: "Hello" } ]
+    )
+
+    assert_not_nil provider
+  end
+
+  # --- Non-streaming prompts ---
+
+  test "non-streaming prompt returns proper response structure" do
+    response = @provider.prompt
+
+    assert_not_nil response
+    assert_not_nil response.raw_request
+    assert_not_nil response.raw_response
+    assert response.messages.size >= 1
+
+    message = response.messages.last
+    assert_equal "assistant", message.role
+    assert_equal "Hello from RubyLLM", message.content
+  end
+
+  test "prompt includes usage data" do
+    response = @provider.prompt
+
+    assert_not_nil response.usage
+    assert_equal 10, response.usage.input_tokens
+    assert_equal 5, response.usage.output_tokens
+  end
+
+  test "prompt includes stop_reason in raw_response" do
+    response = @provider.prompt
+
+    assert_equal "end_turn", response.raw_response[:stop_reason]
+  end
+
+  test "prompt includes model in raw_response" do
+    response = @provider.prompt
+
+    assert_equal "gpt-4o-mini", response.raw_response[:model]
+  end
+
+  # --- Streaming ---
+
+  test "streaming broadcasts open, update, and close events" do
+    stream_events = []
+
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "hello" } ],
+      stream: true,
+      stream_broadcaster: ->(message, delta, event_type) {
+        stream_events << { message: message, delta: delta, type: event_type }
+      }
+    )
+
+    provider.prompt
+
+    assert stream_events.any? { |e| e[:type] == :open }
+    assert stream_events.any? { |e| e[:type] == :update }
+    assert stream_events.any? { |e| e[:type] == :close }
+  end
+
+  test "streaming accumulates content from chunks" do
+    stream_events = []
+
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "hello" } ],
+      stream: true,
+      stream_broadcaster: ->(message, delta, event_type) {
+        stream_events << { message: message&.dup, delta: delta, type: event_type }
+      }
+    )
+
+    provider.prompt
+
+    updates = stream_events.select { |e| e[:type] == :update }
+    assert_equal "Hello ", updates.first[:delta]
+    assert_equal "world", updates.last[:delta]
+  end
+
+  test "streaming with tool calls accumulates tool_calls" do
+    streaming_tool_provider = Class.new(::RubyLLM::StubProvider) do
+      def initialize
+        @call_count = 0
+      end
+
+      def complete(messages, **kwargs, &block)
+        @call_count += 1
+
+        if block_given? && @call_count == 1
+          # Stream a tool call across chunks
+          block.call(::RubyLLM::Chunk.new(
+            tool_calls: {
+              "call_1" => ::RubyLLM::ToolCall.new(
+                id: "call_1",
+                name: "get_weather",
+                arguments: '{"location":'
+              )
+            }
+          ))
+          block.call(::RubyLLM::Chunk.new(
+            tool_calls: {
+              "call_1" => ::RubyLLM::ToolCall.new(
+                id: "call_1",
+                name: "get_weather",
+                arguments: '"Boston"}'
+              )
+            }
+          ))
+          block.call(::RubyLLM::Chunk.new(finish_reason: "tool_calls"))
+          nil
+        elsif block_given?
+          # Second streaming call after tool results - return text
+          block.call(::RubyLLM::Chunk.new(content: "It's 72F in Boston."))
+          block.call(::RubyLLM::Chunk.new(finish_reason: "stop"))
+          nil
+        else
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "It's 72F in Boston.")
+          msg.input_tokens = 10
+          msg.output_tokens = 5
+          msg
+        end
+      end
+    end
+
+    with_custom_provider(streaming_tool_provider.new) do
+      stream_events = []
+
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "weather?" } ],
+        stream: true,
+        tools: [ tool_definition("get_weather") ],
+        tools_function: ->(_name, **_kwargs) { { temp: 72 } },
+        stream_broadcaster: ->(message, delta, event_type) {
+          stream_events << { message: message&.deep_dup, type: event_type }
+        }
+      )
+
+      provider.prompt
+
+      # Check that the streamed message accumulated the tool call arguments
+      assert stream_events.any? { |e| e[:type] == :open }
+      assert stream_events.any? { |e| e[:type] == :close }
+    end
+  end
+
+  # --- Embeddings ---
+
+  test "embedding returns proper response structure" do
+    embed_provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      input: "test text",
+      model: "text-embedding-3-small"
+    )
+
+    response = embed_provider.embed
+
+    assert_not_nil response
+    assert_not_nil response.data
+    assert_equal 1, response.data.size
+
+    embedding = response.data.first
+    assert_equal "embedding", embedding[:object]
+    assert_equal 1536, embedding[:embedding].size
+  end
+
+  test "multiple embeddings return proper structure" do
+    embed_provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      input: [ "first text", "second text" ],
+      model: "text-embedding-3-small"
+    )
+
+    response = embed_provider.embed
+
+    assert_equal 2, response.data.size
+    assert_equal 0, response.data[0][:index]
+    assert_equal 1, response.data[1][:index]
+  end
+
+  # --- Tool calling ---
+
+  test "tool call extraction works correctly with multi-turn" do
+    tool_call_provider = Class.new(::RubyLLM::StubProvider) do
+      def initialize
+        @call_count = 0
+      end
+
+      def complete(messages, **kwargs)
+        @call_count += 1
+
+        if @call_count == 1
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "")
+          msg.tool_calls = {
+            "call_123" => ::RubyLLM::ToolCall.new(
+              id: "call_123",
+              name: "get_weather",
+              arguments: '{"location":"Boston"}'
+            )
+          }
+          msg.input_tokens = 10
+          msg.output_tokens = 5
+          msg
+        else
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "The weather in Boston is sunny and 72F.")
+          msg.input_tokens = 20
+          msg.output_tokens = 10
+          msg
+        end
+      end
+    end
+
+    with_custom_provider(tool_call_provider.new) do
+      tool_calls_received = []
+
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "What is the weather in Boston?" } ],
+        tools: [ tool_definition("get_weather") ],
+        tools_function: ->(name, **kwargs) {
+          tool_calls_received << { name: name, kwargs: kwargs }
+          { temperature: 72, condition: "sunny" }
+        }
+      )
+
+      provider.prompt
+
+      assert_equal 1, tool_calls_received.size
+      assert_equal "get_weather", tool_calls_received.first[:name]
+      assert_equal({ location: "Boston" }, tool_calls_received.first[:kwargs])
+    end
+  end
+
+  test "tool call response sets stop_reason to tool_use" do
+    tool_call_provider = Class.new(::RubyLLM::StubProvider) do
+      def initialize
+        @call_count = 0
+      end
+
+      def complete(messages, **kwargs)
+        @call_count += 1
+
+        if @call_count == 1
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "")
+          msg.tool_calls = {
+            "call_1" => ::RubyLLM::ToolCall.new(id: "call_1", name: "test_tool", arguments: "{}")
+          }
+          msg.input_tokens = 5
+          msg.output_tokens = 3
+          msg
+        else
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "Done.")
+          msg.input_tokens = 10
+          msg.output_tokens = 5
+          msg
+        end
+      end
+    end
+
+    with_custom_provider(tool_call_provider.new) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "do it" } ],
+        tools: [ tool_definition("test_tool") ],
+        tools_function: ->(_name, **_kwargs) { "ok" }
+      )
+
+      response = provider.prompt
+      # Final response should be end_turn since it's the completion
+      assert_equal "end_turn", response.raw_response[:stop_reason]
+    end
+  end
+
+  test "tool call with hash arguments works" do
+    tool_call_provider = Class.new(::RubyLLM::StubProvider) do
+      def initialize
+        @call_count = 0
+      end
+
+      def complete(messages, **kwargs)
+        @call_count += 1
+
+        if @call_count == 1
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "")
+          msg.tool_calls = {
+            "call_1" => ::RubyLLM::ToolCall.new(
+              id: "call_1",
+              name: "search",
+              arguments: { query: "test" } # Hash instead of string
+            )
+          }
+          msg.input_tokens = 5
+          msg.output_tokens = 3
+          msg
+        else
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "Found it.")
+          msg.input_tokens = 10
+          msg.output_tokens = 5
+          msg
+        end
+      end
+    end
+
+    with_custom_provider(tool_call_provider.new) do
+      tool_calls_received = []
+
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "search" } ],
+        tools: [ tool_definition("search") ],
+        tools_function: ->(name, **kwargs) {
+          tool_calls_received << { name: name, kwargs: kwargs }
+          "result"
+        }
+      )
+
+      provider.prompt
+
+      assert_equal 1, tool_calls_received.size
+      assert_equal "search", tool_calls_received.first[:name]
+    end
+  end
+
+  # --- ToolChoiceClearing ---
+
+  test "tool_choice_forces_required? returns true for required" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      tool_choice: "required",
+      messages: [ { role: "user", content: "test" } ],
+      tools: [ tool_definition("test_tool") ],
+      tools_function: ->(_name, **_kwargs) { "ok" }
+    )
+
+    # Initialize request so tool_choice is accessible
+    provider.send(:request=, provider.send(:prompt_request_type).cast(
+      tool_choice: "required", model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "test" } ]
+    ))
+
+    assert provider.send(:tool_choice_forces_required?)
+  end
+
+  test "tool_choice_forces_required? returns false for auto" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      tool_choice: "auto",
+      messages: [ { role: "user", content: "test" } ],
+      tools: [ tool_definition("test_tool") ],
+      tools_function: ->(_name, **_kwargs) { "ok" }
+    )
+
+    provider.send(:request=, provider.send(:prompt_request_type).cast(
+      tool_choice: "auto", model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "test" } ]
+    ))
+
+    refute provider.send(:tool_choice_forces_required?)
+  end
+
+  test "tool_choice_forces_specific? returns tool name for hash" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      tool_choice: { name: "get_weather" },
+      messages: [ { role: "user", content: "test" } ],
+      tools: [ tool_definition("get_weather") ],
+      tools_function: ->(_name, **_kwargs) { "ok" }
+    )
+
+    provider.send(:request=, provider.send(:prompt_request_type).cast(
+      tool_choice: { name: "get_weather" }, model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "test" } ]
+    ))
+
+    forces, name = provider.send(:tool_choice_forces_specific?)
+    assert forces
+    assert_equal "get_weather", name
+  end
+
+  test "tool_choice_forces_specific? returns false for string" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      tool_choice: "auto",
+      messages: [ { role: "user", content: "test" } ],
+      tools: [ tool_definition("test_tool") ],
+      tools_function: ->(_name, **_kwargs) { "ok" }
+    )
+
+    provider.send(:request=, provider.send(:prompt_request_type).cast(
+      tool_choice: "auto", model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "test" } ]
+    ))
+
+    forces, name = provider.send(:tool_choice_forces_specific?)
+    refute forces
+    assert_nil name
+  end
+
+  test "extract_used_function_names returns names from message_stack" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [ { role: "user", content: "test" } ]
+    )
+
+    # Simulate message_stack with assistant tool calls
+    provider.send(:message_stack).push(
+      {
+        role: "assistant",
+        content: "",
+        tool_calls: [
+          { id: "call_1", type: "function", function: { name: "get_weather", arguments: "{}" } },
+          { id: "call_2", type: "function", function: { name: "get_time", arguments: "{}" } }
+        ]
+      }
+    )
+
+    names = provider.send(:extract_used_function_names)
+    assert_includes names, "get_weather"
+    assert_includes names, "get_time"
+    assert_equal 2, names.size
+  end
+
+  test "extract_used_function_names returns empty array when no tool calls" do
+    names = @provider.send(:extract_used_function_names)
+    assert_equal [], names
+  end
+
+  test "prepare_prompt_request clears tool_choice after forced tool is used" do
+    tool_call_provider = Class.new(::RubyLLM::StubProvider) do
+      def initialize
+        @call_count = 0
+      end
+
+      def complete(messages, **kwargs)
+        @call_count += 1
+
+        if @call_count == 1
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "")
+          msg.tool_calls = {
+            "call_1" => ::RubyLLM::ToolCall.new(id: "call_1", name: "get_weather", arguments: '{"location":"NYC"}')
+          }
+          msg.input_tokens = 5
+          msg.output_tokens = 3
+          msg
+        else
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "72F in NYC.")
+          msg.input_tokens = 10
+          msg.output_tokens = 5
+          msg
+        end
+      end
+    end
+
+    with_custom_provider(tool_call_provider.new) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        tool_choice: "required",
+        messages: [ { role: "user", content: "weather?" } ],
+        tools: [ tool_definition("get_weather") ],
+        tools_function: ->(_name, **_kwargs) { { temp: 72 } }
+      )
+
+      provider.prompt
+
+      # After the tool was used and the second turn ran, tool_choice
+      # should have been cleared by prepare_prompt_request_tools
+      assert_nil provider.send(:request).tool_choice
+    end
+  end
+
+  # --- System messages ---
+
+  test "handles system instructions" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      instructions: "You are a helpful assistant.",
+      messages: [
+        { role: "user", content: "Hello" }
+      ]
+    )
+
+    response = provider.prompt
+
+    assert_not_nil response
+    assert_equal "Hello from RubyLLM", response.messages.last.content
+  end
+
+  test "system role messages in messages array are handled" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [
+        { role: "system", content: "You are helpful." },
+        { role: "user", content: "Hello" }
+      ]
+    )
+
+    response = provider.prompt
+
+    assert_not_nil response
+    # Should have system + user + assistant messages
+    assert response.messages.size >= 2
+  end
+
+  # --- Multi-turn conversation ---
+
+  test "multi-turn conversation with existing history" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [
+        { role: "user", content: "What is 2+2?" },
+        { role: "assistant", content: "4" },
+        { role: "user", content: "And 3+3?" }
+      ]
+    )
+
+    response = provider.prompt
+
+    assert_not_nil response
+    # Should include all conversation history plus new assistant response
+    assert response.messages.size >= 3
+  end
+
+  test "conversation with tool_calls in history" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [
+        { role: "user", content: "Get weather" },
+        {
+          role: "assistant", content: "",
+          tool_calls: [
+            { id: "call_1", type: "function", function: { name: "get_weather", arguments: '{"location":"NYC"}' } }
+          ]
+        },
+        { role: "tool", content: '{"temp":72}', tool_call_id: "call_1" },
+        { role: "assistant", content: "It's 72F in NYC." },
+        { role: "user", content: "What about Boston?" }
+      ]
+    )
+
+    response = provider.prompt
+
+    assert_not_nil response
+    assert response.messages.size >= 5
+  end
+
+  # --- Array content format ---
+
+  test "array content with text blocks is extracted" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [
+        {
+          role: "user",
+          content: [
+            { type: "text", text: "Hello" },
+            { type: "text", text: "World" }
+          ]
+        }
+      ]
+    )
+
+    response = provider.prompt
+
+    assert_not_nil response
+  end
+
+  test "array content filters non-text blocks" do
+    provider = ActiveAgent::Providers::RubyLLMProvider.new(
+      service: "RubyLLM",
+      model: "gpt-4o-mini",
+      messages: [
+        {
+          role: "user",
+          content: [
+            { type: "image_url", image_url: { url: "data:image/png;base64,..." } },
+            { type: "text", text: "Describe this image" }
+          ]
+        }
+      ]
+    )
+
+    response = provider.prompt
+
+    assert_not_nil response
+  end
+
+  # --- Empty/nil tool_calls ---
+
+  test "empty tool_calls in response does not break extraction" do
+    empty_tool_provider = Class.new(::RubyLLM::StubProvider) do
+      def complete(messages, **kwargs)
+        msg = ::RubyLLM::Message.new(role: :assistant, content: "No tools needed.")
+        msg.tool_calls = {}
+        msg.input_tokens = 5
+        msg.output_tokens = 3
+        msg
+      end
+    end
+
+    with_custom_provider(empty_tool_provider.new) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "hello" } ]
+      )
+
+      response = provider.prompt
+
+      assert_not_nil response
+      assert_equal "No tools needed.", response.messages.last.content
+    end
+  end
+
+  test "nil tool_calls in response does not break extraction" do
+    nil_tool_provider = Class.new(::RubyLLM::StubProvider) do
+      def complete(messages, **kwargs)
+        msg = ::RubyLLM::Message.new(role: :assistant, content: "Just text.")
+        msg.tool_calls = nil
+        msg.input_tokens = 5
+        msg.output_tokens = 3
+        msg
+      end
+    end
+
+    with_custom_provider(nil_tool_provider.new) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "hello" } ]
+      )
+
+      response = provider.prompt
+
+      assert_not_nil response
+      assert_equal "Just text.", response.messages.last.content
+    end
+  end
+
+  # --- Provider caching ---
+
+  test "provider is cached across multi-turn calls" do
+    resolve_call_count = 0
+
+    tool_call_provider = Class.new(::RubyLLM::StubProvider) do
+      def initialize
+        @call_count = 0
+      end
+
+      def complete(messages, **kwargs)
+        @call_count += 1
+
+        if @call_count == 1
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "")
+          msg.tool_calls = {
+            "call_1" => ::RubyLLM::ToolCall.new(id: "call_1", name: "test", arguments: "{}")
+          }
+          msg.input_tokens = 5
+          msg.output_tokens = 3
+          msg
+        else
+          msg = ::RubyLLM::Message.new(role: :assistant, content: "Done.")
+          msg.input_tokens = 5
+          msg.output_tokens = 3
+          msg
+        end
+      end
+    end
+
+    custom = tool_call_provider.new
+    counting_resolve = ->(model_id, **kwargs) {
+      resolve_call_count += 1
+      [ stub_model_info(model_id), custom ]
+    }
+
+    ::RubyLLM::Models.stub(:resolve, counting_resolve) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "test" } ],
+        tools: [ tool_definition("test") ],
+        tools_function: ->(_name, **_kwargs) { "ok" }
+      )
+
+      provider.prompt
+
+      # Models.resolve should only be called once despite multi-turn
+      assert_equal 1, resolve_call_count
+    end
+  end
+
+  # --- stop_reason from RubyLLM response ---
+
+  test "stop_reason from RubyLLM response is preserved" do
+    stop_reason_provider = Class.new(::RubyLLM::StubProvider) do
+      def complete(messages, **kwargs)
+        msg = ::RubyLLM::Message.new(role: :assistant, content: "Truncated")
+        msg.stop_reason = "length"
+        msg.input_tokens = 5
+        msg.output_tokens = 100
+        msg
+      end
+    end
+
+    with_custom_provider(stop_reason_provider.new) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "write a long essay" } ]
+      )
+
+      response = provider.prompt
+
+      assert_equal "length", response.raw_response[:stop_reason]
+    end
+  end
+
+  # --- Error handling ---
+
+  test "handles error from RubyLLM Models.resolve" do
+    ::RubyLLM::Models.stub(:resolve, ->(*_) { raise StandardError, "Model not found" }) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "nonexistent-model",
+        messages: [ { role: "user", content: "hello" } ]
+      )
+
+      assert_raises(StandardError) { provider.prompt }
+    end
+  end
+
+  test "handles error from provider.complete" do
+    error_provider = Class.new(::RubyLLM::StubProvider) do
+      def complete(messages, **kwargs)
+        raise StandardError, "API error"
+      end
+    end
+
+    with_custom_provider(error_provider.new) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        messages: [ { role: "user", content: "hello" } ]
+      )
+
+      assert_raises(StandardError) { provider.prompt }
+    end
+  end
+
+  # --- max_tokens pass-through ---
+
+  test "max_tokens is passed to provider via params" do
+    capturing_provider = Class.new(::RubyLLM::StubProvider) do
+      def complete(messages, **kwargs)
+        @last_kwargs = kwargs
+        msg = ::RubyLLM::Message.new(role: :assistant, content: "Short.")
+        msg.input_tokens = 5
+        msg.output_tokens = 2
+        msg
+      end
+
+      def last_kwargs
+        @last_kwargs
+      end
+    end
+
+    custom = capturing_provider.new
+    with_custom_provider(custom) do
+      provider = ActiveAgent::Providers::RubyLLMProvider.new(
+        service: "RubyLLM",
+        model: "gpt-4o-mini",
+        max_tokens: 256,
+        messages: [ { role: "user", content: "hello" } ]
+      )
+
+      provider.prompt
+
+      assert_equal({ max_tokens: 256 }, custom.last_kwargs[:params])
+    end
+  end
+
+  test "max_tokens is not passed when not set" do
+    response = @provider.prompt
+    # Default StubProvider captures kwargs - params should not be set
+    stub_provider = ::RubyLLM::Models.default_provider
+    assert_nil stub_provider.last_kwargs[:params]
+  end
+
+  # --- ToolProxy conversion ---
+
+  test "build_ruby_llm_tools converts common format tools" do
+    tools = [
+      {
+        type: "function",
+        function: {
+          name: "get_weather",
+          description: "Get weather for a location",
+          parameters: {
+            type: "object",
+            properties: { location: { type: "string" } },
+            required: [ "location" ]
+          }
+        }
+      }
+    ]
+
+    result = @provider.send(:build_ruby_llm_tools, tools)
+
+    assert_equal 1, result.size
+    assert result.key?("get_weather")
+
+    proxy = result["get_weather"]
+    assert_equal "get_weather", proxy.name
+    assert_equal "Get weather for a location", proxy.description
+    assert_equal "object", proxy.parameters[:type]
+
end + + test "build_ruby_llm_tools converts flat format tools" do + tools = [ + { + name: "search", + description: "Search for items", + parameters: { type: "object", properties: { query: { type: "string" } } } + } + ] + + result = @provider.send(:build_ruby_llm_tools, tools) + + assert_equal 1, result.size + assert_equal "search", result["search"].name + end + + test "build_ruby_llm_tools returns nil for empty tools" do + assert_nil @provider.send(:build_ruby_llm_tools, nil) + assert_nil @provider.send(:build_ruby_llm_tools, []) + end + + test "ToolProxy params_schema returns string-keyed JSON schema" do + proxy = ActiveAgent::Providers::RubyLLM::ToolProxy.new( + name: "test", + description: "A test tool", + parameters: { + type: "object", + properties: { location: { type: "string", description: "City" } }, + required: [ "location" ] + } + ) + + schema = proxy.params_schema + assert_kind_of Hash, schema + assert_equal "object", schema["type"] + assert_equal "string", schema.dig("properties", "location", "type") + assert_equal [ "location" ], schema["required"] + end + + test "ToolProxy params_schema returns nil for empty parameters" do + proxy = ActiveAgent::Providers::RubyLLM::ToolProxy.new( + name: "test", + description: "No params", + parameters: {} + ) + + assert_nil proxy.params_schema + end + + # --- convert_tool_calls_for_ruby_llm --- + + test "convert_tool_calls_for_ruby_llm converts function format" do + tool_calls = [ + { + id: "call_1", + type: "function", + function: { name: "get_weather", arguments: '{"location":"NYC"}' } + } + ] + + result = @provider.send(:convert_tool_calls_for_ruby_llm, tool_calls) + + assert_equal 1, result.size + assert result.key?("call_1") + assert_equal "get_weather", result["call_1"].name + assert_equal '{"location":"NYC"}', result["call_1"].arguments + end + + test "convert_tool_calls_for_ruby_llm converts flat format" do + tool_calls = [ + { id: "call_1", name: "search", input: { query: "test" } } + ] + + result = 
@provider.send(:convert_tool_calls_for_ruby_llm, tool_calls) + + assert_equal 1, result.size + assert_equal "search", result["call_1"].name + end + + test "convert_tool_calls_for_ruby_llm returns nil for nil" do + assert_nil @provider.send(:convert_tool_calls_for_ruby_llm, nil) + end + + private + + # Helper to swap the provider returned by Models.resolve for a test block. + def with_custom_provider(provider_instance, &block) + resolve_stub = ->(model_id, **_kwargs) { [ stub_model_info(model_id), provider_instance ] } + ::RubyLLM::Models.stub(:resolve, resolve_stub, &block) + end + + # Build a Model::Info compatible with both stub and real gem. + def stub_model_info(model_id) + ::RubyLLM::Model::Info.new(id: model_id, provider: "openai") + end + + def tool_definition(name, description: "A test tool", parameters: nil) + { + type: "function", + function: { + name: name, + description: description, + parameters: parameters || { + type: "object", + properties: { + location: { type: "string", description: "Location" } + } + } + } + } + end +end