Update documentation and improve model finding logic
Fixes #68
- Added a new section on working with models in the guides.
- Enhanced model finding logic to prioritize exact matches over aliases.
- Updated links in the documentation for consistency.
docs/guides/models.md (+82 −27)
RubyLLM provides a clean interface for discovering and working with AI models from multiple providers. This guide explains how to find, filter, and select the right model for your needs.

## Finding Models

### Basic Model Selection

The simplest way to use a model is to specify it when creating a chat:

```ruby
# Use the default model
chat = RubyLLM.chat

# Specify a model
chat = RubyLLM.chat(model: 'gpt-4o-mini')

# Change models mid-conversation
chat.with_model('claude-3-5-sonnet')
```

### Model Resolution

{: .warning-title }
> Coming in v1.1.0
>
> Provider-Specific Match and Alias Resolution will be available in the next release.

When you specify a model, RubyLLM follows these steps to find it:

1. **Exact Match**: First tries to find an exact match for the model ID

   ```ruby
   # Uses the actual gemini-2.0-flash model
   chat = RubyLLM.chat(model: 'gemini-2.0-flash')
   ```

2. **Provider-Specific Match**: If a provider is specified, looks for an exact match in that provider
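The resolution order described above can be sketched in plain Ruby. This is an illustrative model only, not RubyLLM's internal implementation: the `MODELS` registry, `ALIASES` table, and `find_model` helper are hypothetical names, and the alias entries are examples.

```ruby
# Hypothetical in-memory registry standing in for the provider model lists.
MODELS = [
  { id: 'gemini-2.0-flash',           provider: 'gemini' },
  { id: 'claude-3-5-sonnet-20241022', provider: 'anthropic' }
].freeze

# Example alias mapping: a short name pointing at a dated model ID.
ALIASES = { 'claude-3-5-sonnet' => 'claude-3-5-sonnet-20241022' }.freeze

def find_model(id, provider: nil)
  # Step 1/2: exact ID match, scoped to the provider when one is given.
  exact = MODELS.find { |m| m[:id] == id && (provider.nil? || m[:provider] == provider) }
  return exact if exact

  # Fallback: resolve an alias to its canonical ID, then look that up.
  resolved = ALIASES[id]
  resolved && MODELS.find { |m| m[:id] == resolved }
end
```

An exact ID always wins, so `find_model('gemini-2.0-flash')` returns the real model even if an alias with the same name were ever added, which matches the "prioritize exact matches over aliases" behavior this change describes.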