Conversation
Thank you @OriPekelman for your contribution! Two new features to consider in your provider implementation:
Docs: https://rubyllm.com/guides/models#using-model-aliases
Please ask me for a code review when you're ready.
Added configuration requirements handling in 75f99a1. Each provider now specifies what configuration it requires.

Example of the new error messages:

RubyLLM::ConfigurationError: anthropic provider is not configured. Add this to your initialization:

```ruby
RubyLLM.configure do |config|
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
end
```
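A minimal sketch of how such a per-provider requirements check could work. All names here are hypothetical illustrations, not the gem's actual internals:

```ruby
# Hypothetical sketch: each provider declares the config keys it needs,
# and a single check raises a helpful error naming what is missing.
class ConfigurationError < StandardError; end

PROVIDER_REQUIREMENTS = {
  anthropic: [:anthropic_api_key],
  mistral:   [:mistral_api_key]
}.freeze

def ensure_configured!(provider, config)
  missing = PROVIDER_REQUIREMENTS.fetch(provider, []).reject { |key| config[key] }
  return if missing.empty?

  raise ConfigurationError,
        "#{provider} provider is not configured. Missing: #{missing.join(', ')}"
end

config = { mistral_api_key: 'test-key' }
ensure_configured!(:mistral, config) # passes silently

begin
  ensure_configured!(:anthropic, config)
rescue ConfigurationError => e
  puts e.message
end
```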
@OriPekelman is this still on your radar? I'd love to merge Mistral support soon. Whenever you are ready, could you resolve the conflicts and rerequest a review? Thanks!
Yup, I'll try to get to it very soon.
Hi @OriPekelman, is Mistral OpenAI-compatible?
Well, like most providers, it has some basic OpenAI compatibility, but only for chat completions, and even there you'll find quite a few differences in tool calling and multi-turn behaviour. I rebased the whole thing and cleaned it up quite a bit. There are a number of tests I had to skip (for example, Mistral does not support custom dimensions for embeddings, tool calling in multi-turn conversations doesn't support adding tools mid-conversation, and only the last chunk in a stream has a token count). Hopefully this is useful.
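Because usage only arrives on the final stream chunk, a client has to accumulate streamed content and read the token counts once at the end. A minimal sketch under that assumption (the chunk shape mimics the OpenAI-style wire format, not the gem's internal types):

```ruby
# Accumulate streamed content; usage appears only on the last chunk.
def consume_stream(chunks)
  content = +''
  usage = nil
  chunks.each do |chunk|
    delta = chunk.dig('choices', 0, 'delta', 'content')
    content << delta if delta
    usage = chunk['usage'] if chunk['usage'] # present only on the final chunk
  end
  [content, usage]
end

chunks = [
  { 'choices' => [{ 'delta' => { 'content' => 'Hel' } }] },
  { 'choices' => [{ 'delta' => { 'content' => 'lo' } }],
    'usage' => { 'prompt_tokens' => 5, 'completion_tokens' => 2 } }
]
content, usage = consume_stream(chunks)
```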
crmne
left a comment
Good stuff! I left you some comments
```ruby
def completion_url
  "#{Mistral.api_base(RubyLLM.config)}/chat/completions"
end
```
you don't need to pass the api base here
Check out the other provider's code. You only need the path part of the url.
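The suggestion amounts to returning only the path and letting the connection layer supply the base. A sketch with hypothetical method names, not the gem's actual internals:

```ruby
require 'uri'

# Hypothetical: the connection layer owns the base URL...
def api_base
  'https://api.mistral.ai/v1'
end

# ...so the provider returns only the path part.
def completion_url
  'chat/completions'
end

# The connection layer joins them when issuing the request.
full_url = URI.join("#{api_base}/", completion_url).to_s
```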
```ruby
  Array(tools)
end

puts "\n[DEBUG] Available tools: #{tools_array&.map { |t| t.name.to_s }}" if ENV["DEBUG"]
```
```ruby
  "none"
end

puts "[DEBUG] Tool choice: #{effective_tool_choice.inspect}" if ENV["DEBUG"]
```
```ruby
  frequency_penalty: frequency_penalty,
}.compact

puts "[DEBUG] Full payload: #{payload.inspect}" if ENV["DEBUG"]
```
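The `.compact` in the hunk above drops nil-valued options so unset parameters are omitted from the request body rather than serialized as JSON null. A small illustration with made-up values:

```ruby
# Illustrative payload: frequency_penalty was never set, so it is nil
# and .compact removes the key entirely before serialization.
payload = {
  model: 'mistral-small-latest',
  temperature: 0.7,
  frequency_penalty: nil, # unset option
  messages: [{ role: 'user', content: 'Hi' }]
}.compact
```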
```ruby
      arguments: tool_call.arguments,
    },
  }
  puts "[DEBUG] Rendered tool call: #{tool_call_spec.inspect}" if ENV["DEBUG"]
```
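A later commit in this PR replaced these `puts ... if ENV["DEBUG"]` calls with logger-based debug output. A stdlib-only sketch of the same gating idea, independent of the gem's actual logger setup:

```ruby
require 'logger'
require 'stringio'

# Route debug output through a Logger; level gating replaces the
# explicit ENV["DEBUG"] check at every call site.
buffer = StringIO.new
logger = Logger.new(buffer)
logger.level = ENV['DEBUG'] ? Logger::DEBUG : Logger::INFO

call = { name: 'weather' }
# The block form avoids building the string when DEBUG is off.
logger.debug { "Rendered tool call: #{call.inspect}" }
```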
spec/ruby_llm/chat_spec.rb (outdated)

```ruby
it "#{provider}/#{model} successfully uses the system prompt" do
  skip 'System prompt can be flaky for Ollama models' if provider == :ollama
  skip 'Mistral API does not allow system messages after assistant messages' if provider == :mistral
```
but in this case the chat should start with the system message, right?
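To satisfy Mistral's constraint, the system message needs to come first. A minimal sketch of normalizing message order (a hypothetical helper, not the gem's API):

```ruby
# Move a system message (if any) to the front so the conversation
# starts with it, as Mistral requires.
def normalize_messages(messages)
  system, rest = messages.partition { |m| m[:role] == 'system' }
  system.take(1) + rest
end

messages = [
  { role: 'user', content: 'Hi' },
  { role: 'system', content: 'You are terse.' }
]
ordered = normalize_messages(messages)
```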
```ruby
skip 'Mistral API only returns token count on the last chunk.' if provider == :mistral
```
That's also true of the other APIs, so don't worry.
spec/ruby_llm/chat_tools_spec.rb (outdated)

```ruby
model = model_info[:model]
provider = model_info[:provider]
it "#{provider}/#{model} can use tools" do # rubocop:disable RSpec/MultipleExpectations
  skip 'Mistral does not reliably support tool usage' if provider == :mistral
```
Really? That would be super bad! In what way is it not reliable?
OK, I addressed the Mistral tool hallucination issue. From what I can see it only happens with the small model; at any rate it no longer breaks the implementation. The model tends to believe it has a "web_search" tool whether you have one or not.
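One way to make an implementation robust against a hallucinated tool name is to only execute tool calls whose names were actually registered. A sketch of that guard (hypothetical shapes, not the actual PR code):

```ruby
# Filter out tool calls the model invented, e.g. a phantom "web_search",
# by checking names against the tools actually registered on the chat.
def safe_tool_calls(tool_calls, registered_tools)
  known = registered_tools.map { |t| t[:name].to_s }
  tool_calls.select { |call| known.include?(call[:name].to_s) }
end

calls = [{ name: 'weather' }, { name: 'web_search' }]
kept = safe_tool_calls(calls, [{ name: 'weather' }])
```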
spec/ruby_llm/embeddings_spec.rb (outdated)

```ruby
it "#{provider}/#{model} can handle a single text with custom dimensions" do # rubocop:disable RSpec/MultipleExpectations
  skip "Mistral embed does not support custom dimensions" if model == "mistral-embed"
```
you can do `if provider == :mistral`
Went for provider, but if I am not mistaken it is a string here.
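Since `provider` may arrive as a Symbol in one spec and a String in another, a small normalization sidesteps the mismatch. For example:

```ruby
# :mistral != "mistral", so comparing raw values silently misses;
# normalizing both sides to strings makes the check robust.
def mistral?(provider)
  provider.to_s == 'mistral'
end

mistral?(:mistral)  # matches the symbol form
mistral?('mistral') # matches the string form
```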
spec/ruby_llm/embeddings_spec.rb (outdated)

```ruby
it "#{provider}/#{model} can handle multiple texts with custom dimensions" do # rubocop:disable RSpec/MultipleExpectations
  skip "Mistral embed does not support custom dimensions" if model == "mistral-embed"
```
Oh, and now I'll try to do a clean rebase.
I see there are some new tests; I'll see if I can implement the local image stuff.
OK, this should work now (given that models.json is up to date) for the failing vision tests.
OK, rebased on main. I did not commit the changes to models.json (as it is a possible source of merge conflicts), but the update seems to work, and the tests look green after I run it.
crmne
left a comment
I'm confused as to why we need so much code for the Mistral provider. Isn't it OpenAI compatible? In that case, please take a look at already existing OpenAI compatible providers like Ollama and OpenRouter.
```ruby
'deepseek' => 'DeepSeek',
'bedrock' => 'Bedrock',
'openrouter' => 'OpenRouter',
'pdf' => 'PDF'
```
Oops. That was part of me removing the PDF support for Mistral (I added it with their dedicated endpoint, https://docs.mistral.ai/capabilities/OCR/basic_ocr/, but decided against it before the main functionality gets merged). My apologies.
lib/ruby_llm.rb (outdated)

```ruby
'bedrock' => 'Bedrock',
'openrouter' => 'OpenRouter',
'pdf' => 'PDF',
'mistral' => 'Mistral'
```
I realize we didn't need bedrock as an inflection, and so we don't need mistral as one either
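For context, inflections are only needed where default camelization gets the constant name wrong; 'mistral' capitalizes correctly on its own. An illustrative sketch (not the loader's actual implementation):

```ruby
# Only acronyms and irregular casings need an override; everything
# else camelizes correctly by default, which is why 'mistral' (and
# 'bedrock') never needed an inflection entry.
ACRONYMS = { 'pdf' => 'PDF', 'openrouter' => 'OpenRouter', 'deepseek' => 'DeepSeek' }.freeze

def camelize(name)
  ACRONYMS.fetch(name) { name.split('_').map(&:capitalize).join }
end

camelize('mistral')    # default rule suffices
camelize('openrouter') # needs the override
```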
OK will remove that one too.
I guess this is a merge error, but we don't need this anymore thanks to Parsera.
This looks like a merge error too.
…ration

- Updated model aliases in aliases.json to include specific paths for Mistral models.
- Added `mistral_api_base` configuration option in configuration.rb.
- Introduced new models in models.json, including Mistral Tiny, Pixtral Large, and others.
- Refactored Mistral provider methods to support the new API structure and error handling.
- Removed deprecated image generation methods from the Mistral provider.
- Updated tests to reflect changes in model handling and API responses.

…chat provider

- Updated debug output from `puts` to `RubyLLM.logger.debug` for better logging management.
- Commented out skip conditions in tests for the Mistral provider to avoid flaky behavior during testing.

… cassettes

- Refactored the chat provider to be closer to the other implementations.
- No longer ignoring specs except custom dimensions.
- Refactored error message formatting in the Mistral provider.
- Mistral (small) may hallucinate tools, which explains previous failures. Added robustness for that.
- Updated chat and streaming files to include necessary modules and methods.
- Removed outdated VCR cassettes related to Mistral functionality.
- Updated test fixtures to reflect changes in API responses and model handling.
- Added frozen string literal to Mistral provider files for performance.
Went ahead and implemented it in 92d5451.
Tried to follow existing implementation as much as possible and hopefully integrated correctly.