The OpenAI Ruby library provides convenient access to the OpenAI REST API from any Ruby 3.2.0+ application. It ships with comprehensive types & docstrings in Yard, RBS, and RBI – [see below](https://github.com/openai/openai-ruby#Sorbet) for usage with Sorbet. The standard library's `net/http` is used as the HTTP transport, with connection pooling via the `connection_pool` gem.
This library is written with [Sorbet type definitions](https://sorbet.org/docs/rbi). However, there is no runtime dependency on `sorbet-runtime`.
### Streaming
We provide support for streaming responses using Server-Sent Events (SSE).

**coming soon:** `openai.chat.completions.stream` will soon come with Python SDK style higher level streaming responses support.

```ruby
stream = openai.chat.completions.stream_raw(
  messages: [{role: "user", content: "Say this is a test"}],
  model: :"gpt-4.1"
)

stream.each do |completion|
  puts(completion)
end
```
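Each yielded chunk carries a partial delta, and a common pattern is concatenating those deltas into the full message. Here is a self-contained sketch using hash literals as stand-ins for the chunk objects a real stream yields (the field shapes are assumptions, not the SDK's types):

```ruby
# Stand-in chunks shaped like streamed chat completion deltas (illustrative only).
chunks = [
  {choices: [{delta: {content: "This is"}}]},
  {choices: [{delta: {content: " a test"}}]},
  {choices: [{delta: {}}]} # final chunks may carry no content
]

text = +""
chunks.each do |chunk|
  delta = chunk[:choices].first[:delta][:content]
  text << delta if delta
end

puts(text) # => This is a test
```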
### Pagination
```ruby
page.auto_paging_each do |job|
  # …
end
```

Alternatively, you can use the `#next_page?` and `#next_page` methods for more granular control working with pages.

```ruby
if page.next_page?
  new_page = page.next_page
  puts(new_page.data[0].id)
end
```
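The same traversal can be written by hand with `#next_page?` and `#next_page`. A self-contained sketch with a stub page class standing in for the SDK's page objects (the stub is hypothetical; only the method names come from this section):

```ruby
# Stub page implementing the #next_page? / #next_page protocol described above.
SketchPage = Struct.new(:data, :following) do
  def next_page?
    !following.nil?
  end

  def next_page
    following
  end

  # Walks this page and every following page, yielding each item.
  def auto_paging_each(&blk)
    page = self
    loop do
      page.data.each(&blk)
      break unless page.next_page?
      page = page.next_page
    end
  end
end

last  = SketchPage.new([:job_c], nil)
first = SketchPage.new([:job_a, :job_b], last)

ids = []
first.auto_paging_each { |job| ids << job }
puts(ids.inspect) # => [:job_a, :job_b, :job_c]
```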
### File uploads
Request parameters that correspond to file uploads can be passed as raw contents, a [`Pathname`](https://rubyapi.org/3.2/o/pathname) instance, [`StringIO`](https://rubyapi.org/3.2/o/stringio), or more.

```ruby
require "pathname"

# Use `Pathname` to send the filename and/or avoid paging a large file into memory:
file_object = openai.files.create(file: Pathname("input.jsonl"), purpose: "fine-tune")
```
Note that you can also pass a raw `IO` descriptor, but this disables retries, as the library can't be sure if the descriptor is a file or pipe (which cannot be rewound).
### Handling errors
When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of `OpenAI::Errors::APIError` will be thrown:
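The real error classes cannot be exercised without a live request, so here is a hedged, self-contained sketch of the rescue pattern with stand-in classes mirroring the `OpenAI::Errors::APIError` hierarchy named above (the subclass names and `fake_request` are illustrative assumptions, not the library's definitions):

```ruby
# Stand-in error hierarchy mirroring the one described above (illustrative only).
module SketchErrors
  class APIError < StandardError; end
  class APIConnectionError < APIError; end # hypothetical: connection failures
  class APIStatusError < APIError; end     # hypothetical: non-success status codes
end

# Stand-in for any client call that fails to reach the server.
def fake_request
  raise SketchErrors::APIConnectionError, "connection refused"
end

message =
  begin
    fake_request
  rescue SketchErrors::APIConnectionError => e
    "could not reach the server: #{e.message}"
  rescue SketchErrors::APIError => e
    "API error: #{e.message}"
  end

puts(message) # => could not reach the server: connection refused
```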
By default, requests will time out after 600 seconds. You can use the `timeout` option to configure or disable this:
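As a sketch of what that configuration might look like (the `timeout:` constructor option below is an assumption based on this section's description, not verified API):

```ruby
# Hypothetical: disable the timeout for every request made by this client.
openai = OpenAI::Client.new(
  timeout: nil # the default is 600 (seconds)
)
```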
On timeout, `OpenAI::Errors::APITimeoutError` is raised.
Note that requests that time out are retried by default.
## Advanced concepts
### BaseModel
All parameter and response objects inherit from `OpenAI::Internal::Type::BaseModel`, which provides several conveniences, including:
1. All fields, including unknown ones, are accessible with `obj[:prop]` syntax, and can be destructured with `obj => {prop: prop}` or pattern-matching syntax.
2. Structural equivalence for equality; if two API calls return the same values, comparing the responses with `==` will return true.
3. Both instances and the classes themselves can be pretty-printed.
4. Helpers such as `#to_h`, `#deep_to_h`, `#to_json`, and `#to_yaml`.
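The bracket access and destructuring in item 1 are plain Ruby pattern matching. A stdlib-only sketch on a `Hash` (standing in for a `BaseModel` instance) shows the same syntax:

```ruby
obj = {prop: 1, other: "x"} # stand-in for a BaseModel-like object

# Bracket access works for any field, known or unknown:
value = obj[:prop]

# Rightward assignment destructuring (Ruby 3.0+) binds the local `prop`:
obj => {prop:}

# Full pattern matching:
label =
  case obj
  in {prop: Integer => n}
    "integer prop: #{n}"
  else
    "no integer prop"
  end

puts(label) # => integer prop: 1
```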
### Making custom or undocumented requests
#### Undocumented properties
You can send undocumented parameters to any endpoint, and read undocumented response properties, like so:
Note: the `extra_` parameters of the same name override the documented parameters.
```ruby
chat_completion =
  openai.chat.completions.create(
    messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
    model: :"gpt-4.1",
    request_options: {
      extra_query: {my_query_parameter: value},
      extra_body: {my_body_parameter: value},
      extra_headers: {"my-header": value}
    }
  )

puts(chat_completion[:my_undocumented_property])
```
#### Undocumented request params
If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` under the `request_options:` parameter when making a request as seen in examples above.
#### Undocumented endpoints
To make requests to undocumented endpoints while retaining the benefit of auth, retries, and so on, you can make requests using `client.request`, like so:
```ruby
response = client.request(
  method: :post,
  path: '/undocumented/endpoint',
  query: {"dog": "woof"},
  headers: {"useful-header": "interesting-value"},
  body: {"hello": "world"}
)
```
### Concurrency & connection pooling
The `OpenAI::Client` instances are thread-safe, but are only fork-safe when there are no in-flight HTTP requests.

Each instance of `OpenAI::Client` has its own HTTP connection pool with a default size of 99. As such, we recommend instantiating the client once per application in most settings.

When all available connections from the pool are checked out, requests wait for a new connection to become available, with queue time counting towards the request timeout.

Unless otherwise specified, other classes in the SDK do not have locks protecting their underlying data structure.
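The checkout-and-wait behavior can be sketched with a stdlib `Queue` standing in for the connection pool (illustrative only; this is not the SDK's implementation):

```ruby
require "timeout"

pool = Queue.new
2.times { |i| pool << "conn-#{i}" } # a tiny pool with two connections

# Check out a connection, waiting up to `wait` seconds; queue time counts
# toward the caller's deadline, as described above.
def with_connection(pool, wait)
  conn = Timeout.timeout(wait) { pool.pop }
  yield conn
ensure
  pool << conn if conn
end

used = with_connection(pool, 1) { |conn| conn }
puts(used) # => conn-0
```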
## Sorbet
This library provides comprehensive [RBI](https://sorbet.org/docs/rbi) definitions, and has no dependency on `sorbet-runtime`.
You can provide typesafe request parameters like so:
246
258
247
259
```ruby
248
-
moduleOpenAI::ChatModel
249
-
# This alias aids language service driven navigation.
messages: [{role:"user", content:"Say this is a test"}],
272
+
model::"gpt-4.1"
273
+
)
274
+
275
+
# You can also splat a full Params class:
276
+
params =OpenAI::Chat::CompletionCreateParams.new(
260
277
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(role:"user", content:"Say this is a test")],
261
278
model::"gpt-4.1"
262
279
)
263
280
openai.chat.completions.create(**params)
264
281
```
### Enums
Since this library does not depend on `sorbet-runtime`, it cannot provide [`T::Enum`](https://sorbet.org/docs/tenum) instances. Instead, we provide "tagged symbols", which are always primitives at runtime:
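The "tagged symbol" idea can be sketched with a plain module of symbol constants (the module below is hypothetical; the library generates its own definitions with richer RBI types):

```ruby
# Illustrative stand-in: each enum member is just a Ruby symbol at runtime.
module SketchReasoningEffort
  LOW = :low
  MEDIUM = :medium
  HIGH = :high
end

puts(SketchReasoningEffort::LOW == :low) # => true
```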
Enum parameters have a "relaxed" type, so you can either pass in enum constants or their literal value:
```ruby
# Using the enum constants preserves the tagged type information:
openai.chat.completions.create(
  reasoning_effort: OpenAI::ReasoningEffort::LOW,
  # …
)

# Literal values are also permissible:
openai.chat.completions.create(
  reasoning_effort: :low,
  # …
)
```
## Versioning
This package follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions. As the library is in initial development and has a major version of `0`, APIs may change at any time.