Commit f3c721e

docs: rewrite much of README.md for readability

1 parent: 3fc0971

File tree

2 files changed: +123 -70 lines

README.md

Lines changed: 115 additions & 70 deletions
@@ -1,6 +1,6 @@
 # OpenAI Ruby API library
 
-The OpenAI Ruby library provides convenient access to the OpenAI REST API from any Ruby 3.2.0+ application.
+The OpenAI Ruby library provides convenient access to the OpenAI REST API from any Ruby 3.2.0+ application. It ships with comprehensive types & docstrings in Yard, RBS, and RBI – [see below](https://github.com/openai/openai-ruby#Sorbet) for usage with Sorbet. The standard library's `net/http` is used as the HTTP transport, with connection pooling via the `connection_pool` gem.
 
 ## Documentation
 
@@ -40,17 +40,19 @@ chat_completion = openai.chat.completions.create(
 puts(chat_completion)
 ```
 
-## Sorbet
-
-This library is written with [Sorbet type definitions](https://sorbet.org/docs/rbi). However, there is no runtime dependency on the `sorbet-runtime`.
+### Streaming
 
-When using sorbet, it is recommended to use model classes as below. This provides stronger type checking and tooling integration.
+We provide support for streaming responses using Server-Sent Events (SSE).
 
 ```ruby
-openai.chat.completions.create(
-  messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(role: "user", content: "Say this is a test")],
+stream = openai.chat.completions.stream_raw(
+  messages: [{role: "user", content: "Say this is a test"}],
   model: :"gpt-4.1"
 )
+
+stream.each do |completion|
+  puts(completion)
+end
 ```
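The streaming flow above can be simulated without the network. This is a sketch with hypothetical names (not the SDK's internals), using a lazy `Enumerator` to stand in for the object returned by `stream_raw`:

```ruby
# Stand-in for the Enumerable that `stream_raw` returns; chunks are
# produced lazily, as they would arrive over the wire via SSE.
fake_stream = Enumerator.new do |yielder|
  ["This ", "is ", "a ", "test"].each { |delta| yielder << delta }
end

# Consume chunk-by-chunk, as in the example above:
buffer = +""
fake_stream.each { |chunk| buffer << chunk }
puts(buffer) # prints "This is a test"
```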
 
 ### Pagination
@@ -72,49 +74,54 @@ page.auto_paging_each do |job|
 end
 ```
 
-### Streaming
-
-We provide support for streaming responses using Server-Sent Events (SSE).
+Alternatively, you can use the `#next_page?` and `#next_page` methods for more granular control when working with pages.
 
 **coming soon:** `openai.chat.completions.stream` will soon come with Python SDK style higher level streaming responses support.
 
 ```ruby
-stream = openai.chat.completions.stream_raw(
-  messages: [{role: "user", content: "Say this is a test"}],
-  model: :"gpt-4.1"
-)
-
-stream.each do |completion|
-  print(completion.choices.first.delta.content)
+if page.next_page?
+  new_page = page.next_page
+  puts(new_page.data[0].id)
 end
 ```
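The `#next_page?` / `#next_page` protocol above can be sketched with a stub page class; `FakePage` is hypothetical and only assumes the two methods from the example:

```ruby
# Minimal stand-in for the SDK's page objects.
FakePage = Struct.new(:data, :next_page) do
  def next_page? = !next_page.nil?
end

last  = FakePage.new(%w[job_3], nil)
first = FakePage.new(%w[job_1 job_2], last)

# Walk every page manually, collecting results:
page = first
jobs = []
loop do
  jobs.concat(page.data)
  break unless page.next_page?
  page = page.next_page
end
puts(jobs.inspect) # prints ["job_1", "job_2", "job_3"]
```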
 
 ### File uploads
 
-Request parameters that correspond to file uploads can be passed as `StringIO`, or a [`Pathname`](https://rubyapi.org/3.2/o/pathname) instance.
+Request parameters that correspond to file uploads can be passed as raw contents, a [`Pathname`](https://rubyapi.org/3.2/o/pathname) instance, [`StringIO`](https://rubyapi.org/3.2/o/stringio), or more.
 
 ```ruby
 require "pathname"
 
-# using `Pathname`, the file will be lazily read, without reading everything in to memory
+# Use `Pathname` to send the filename and/or avoid paging a large file into memory:
 file_object = openai.files.create(file: Pathname("input.jsonl"), purpose: "fine-tune")
 
-file = File.read("input.jsonl")
-# using `StringIO`, useful if you already have the data in memory
-file_object = openai.files.create(file: StringIO.new(file), purpose: "fine-tune")
+# Alternatively, pass file contents or a `StringIO` directly:
+file_object = openai.files.create(file: File.read("input.jsonl"), purpose: "fine-tune")
+
+# Or, to control the filename and/or content type:
+file = OpenAI::FilePart.new(File.read("input.jsonl"), filename: "input.jsonl", content_type: "")
+file_object = openai.files.create(file: file, purpose: "fine-tune")
 
 puts(file_object.id)
 ```
 
-### Errors
+Note that you can also pass a raw `IO` descriptor, but this disables retries, as the library can't be sure whether the descriptor is a file or a pipe (which cannot be rewound).
+
+### Handling errors
 
 When the library is unable to connect to the API, or if the API returns a non-success status code (i.e., 4xx or 5xx response), a subclass of `OpenAI::Errors::APIError` will be thrown:
 
 ```ruby
 begin
   job = openai.fine_tuning.jobs.create(model: :"babbage-002", training_file: "file-abc123")
-rescue OpenAI::Errors::APIError => e
-  puts(e.status) # 400
+rescue OpenAI::Errors::APIConnectionError => e
+  puts("The server could not be reached")
+  puts(e.cause) # an underlying Exception, likely raised within `net/http`
+rescue OpenAI::Errors::RateLimitError => e
+  puts("A 429 status code was received; we should back off a bit.")
+rescue OpenAI::Errors::APIStatusError => e
+  puts("Another non-200-range status code was received")
+  puts(e.status)
 end
 ```
 
@@ -158,11 +165,7 @@ openai.chat.completions.create(
 
 ### Timeouts
 
-By default, requests will time out after 600 seconds.
-
-Timeouts are applied separately to the initial connection and the overall request time, so in some cases a request could wait 2\*timeout seconds before it fails.
-
-You can use the `timeout` option to configure or disable this:
+By default, requests will time out after 600 seconds. You can use the `timeout` option to configure or disable this:
 
 ```ruby
 # Configure the default for all requests:
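For context on what a timeout budget covers: the SDK's transport is the standard library's `net/http`, which exposes separate connect and read deadlines. A sketch using only the standard library (the SDK's exact internal mapping may differ):

```ruby
require "net/http"

# Configure connect and read deadlines on a Net::HTTP instance; no
# request is sent here, so this runs without network access.
http = Net::HTTP.new("api.openai.com", 443)
http.use_ssl      = true
http.open_timeout = 600 # seconds allowed to establish the TCP/TLS connection
http.read_timeout = 600 # seconds allowed to wait on each socket read

puts(http.open_timeout) # prints 600
```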
@@ -178,91 +181,133 @@ openai.chat.completions.create(
 )
 ```
 
-## Model DSL
+On timeout, `OpenAI::Errors::APITimeoutError` is raised.
 
-This library uses a simple DSL to represent request parameters and response shapes in `lib/openai/models`.
+Note that requests that time out are retried by default.
 
-With the right [editor plugins](https://shopify.github.io/ruby-lsp), you can ctrl-click on elements of the DSL to navigate around and explore the library.
+## Advanced concepts
 
-In all places where a `BaseModel` type is specified, vanilla Ruby `Hash` can also be used. For example, the following are interchangeable as arguments:
+### BaseModel
 
-```ruby
-# This has tooling readability, for auto-completion, static analysis, and goto definition with supported language services
-params = OpenAI::Models::Chat::CompletionCreateParams.new(
-  messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(role: "user", content: "Say this is a test")],
-  model: :"gpt-4.1"
-)
+All parameter and response objects inherit from `OpenAI::Internal::Type::BaseModel`, which provides several conveniences, including:
 
-# This also works
-params = {
-  messages: [{role: "user", content: "Say this is a test"}],
-  model: :"gpt-4.1"
-}
-```
+1. All fields, including unknown ones, are accessible with `obj[:prop]` syntax, and can be destructured with `obj => {prop: prop}` or pattern-matching syntax.
 
-## Editor support
+2. Structural equivalence for equality; if two API calls return the same values, comparing the responses with `==` will return `true`.
 
-A combination of [Shopify LSP](https://shopify.github.io/ruby-lsp) and [Solargraph](https://solargraph.org/) is recommended for non-[Sorbet](https://sorbet.org) users. The former is especially good at go to definition, while the latter has much better auto-completion support.
+3. Both instances and the classes themselves can be pretty-printed.
 
-## Advanced concepts
+4. Helpers such as `#to_h`, `#deep_to_h`, `#to_json`, and `#to_yaml`.
+
+### Making custom or undocumented requests
+
+#### Undocumented properties
 
-### Making custom/undocumented requests
+You can send undocumented parameters to any endpoint, and read undocumented response properties, like so:
+
+Note: an `extra_` parameter of the same name overrides the documented parameter.
+
+```ruby
+chat_completion =
+  openai.chat.completions.create(
+    messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
+    model: :"gpt-4.1",
+    request_options: {
+      extra_query: {my_query_parameter: value},
+      extra_body: {my_body_parameter: value},
+      extra_headers: {"my-header": value}
+    }
+  )
+
+puts(chat_completion[:my_undocumented_property])
+```
 
 #### Undocumented request params
 
-If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` under the `request_options:` parameter when making a requests as seen in examples above.
+If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` options under the `request_options:` parameter when making a request, as seen in the examples above.
 
 #### Undocumented endpoints
 
-To make requests to undocumented endpoints, you can make requests using `client.request`. Options on the client will be respected (such as retries) when making this request.
+To make requests to undocumented endpoints while retaining the benefit of auth, retries, and so on, you can make requests using `client.request`, like so:
 
 ```ruby
 response = client.request(
   method: :post,
   path: '/undocumented/endpoint',
   query: {"dog": "woof"},
   headers: {"useful-header": "interesting-value"},
-  body: {"he": "llo"},
+  body: {"hello": "world"}
 )
 ```
 
 ### Concurrency & connection pooling
 
-The `OpenAI::Client` instances are thread-safe, and should be re-used across multiple threads. By default, each `Client` have their own HTTP connection pool, with a maximum number of connections equal to thread count.
+The `OpenAI::Client` instances are threadsafe, but are only fork-safe when there are no in-flight HTTP requests.
 
-When the maximum number of connections has been checked out from the connection pool, the `Client` will wait for an in use connection to become available. The queue time for this mechanism is accounted for by the per-request timeout.
+Each instance of `OpenAI::Client` has its own HTTP connection pool with a default size of 99. As such, we recommend instantiating the client once per application in most settings.
 
-Unless otherwise specified, other classes in the SDK do not have locks protecting their underlying data structure.
+When all available connections from the pool are checked out, requests wait for a new connection to become available, with queue time counting towards the request timeout.
 
-Currently, `OpenAI::Client` instances are only fork-safe if there are no in-flight HTTP requests.
-
-### Sorbet
+Unless otherwise specified, other classes in the SDK do not have locks protecting their underlying data structure.
 
-#### Enums
+## Sorbet
 
-Sorbet's typed enums require sub-classing of the [`T::Enum` class](https://sorbet.org/docs/tenum) from the `sorbet-runtime` gem.
+This library provides comprehensive [RBI](https://sorbet.org/docs/rbi) definitions, and has no dependency on `sorbet-runtime`.
 
-Since this library does not depend on `sorbet-runtime`, it uses a [`T.all` intersection type](https://sorbet.org/docs/intersection-types) with a ruby primitive type to construct a "tagged alias" instead.
+You can provide typesafe request parameters like so:
 
 ```ruby
-module OpenAI::ChatModel
-  # This alias aids language service driven navigation.
-  TaggedSymbol = T.type_alias { T.all(Symbol, OpenAI::ChatModel) }
-end
+openai.chat.completions.create(
+  messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(role: "user", content: "Say this is a test")],
+  model: :"gpt-4.1"
+)
 ```
 
-#### Argument passing trick
-
-It is possible to pass a compatible model / parameter class to a method that expects keyword arguments by using the `**` splat operator.
+Or, equivalently:
 
 ```ruby
-params = OpenAI::Models::Chat::CompletionCreateParams.new(
+# Hashes work, but are not typesafe:
+openai.chat.completions.create(
+  messages: [{role: "user", content: "Say this is a test"}],
+  model: :"gpt-4.1"
+)
+
+# You can also splat a full Params class:
+params = OpenAI::Chat::CompletionCreateParams.new(
   messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(role: "user", content: "Say this is a test")],
   model: :"gpt-4.1"
 )
 openai.chat.completions.create(**params)
 ```
 
+### Enums
+
+Since this library does not depend on `sorbet-runtime`, it cannot provide [`T::Enum`](https://sorbet.org/docs/tenum) instances. Instead, we provide "tagged symbols", which are always primitives at runtime:
+
+```ruby
+# :low
+puts(OpenAI::ReasoningEffort::LOW)
+
+# Revealed type: `T.all(OpenAI::ReasoningEffort, Symbol)`
+T.reveal_type(OpenAI::ReasoningEffort::LOW)
+```
+
+Enum parameters have a "relaxed" type, so you can pass either enum constants or their literal values:
+
+```ruby
+# Using the enum constants preserves the tagged type information:
+openai.chat.completions.create(
+  reasoning_effort: OpenAI::ReasoningEffort::LOW,
+  #
+)
+
+# Literal values are also permissible:
+openai.chat.completions.create(
+  reasoning_effort: :low,
+  #
+)
+```
+
 ## Versioning
 
 This package follows [SemVer](https://semver.org/spec/v2.0.0.html) conventions. As the library is in initial development and has a major version of `0`, APIs may change at any time.

lib/openai/internal/type/base_model.rb

Lines changed: 8 additions & 0 deletions
@@ -386,6 +386,14 @@ def deep_to_h = self.class.recursively_to_h(@data, convert: false)
 # @param keys [Array<Symbol>, nil]
 #
 # @return [Hash{Symbol=>Object}]
+#
+# @example
+#   # `comparison_filter` is an `OpenAI::ComparisonFilter`
+#   comparison_filter => {
+#     key: key,
+#     type: type,
+#     value: value
+#   }
 def deconstruct_keys(keys)
   (keys || self.class.known_fields.keys)
     .filter_map do |k|
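The `@example` in the diff above relies on Ruby's `deconstruct_keys` pattern-matching protocol. A minimal sketch of how a class opts in; this `ComparisonFilter` is a stand-in, not the SDK class:

```ruby
# A tiny class implementing the protocol that BaseModel#deconstruct_keys
# provides in the SDK:
class ComparisonFilter
  def initialize(key:, type:, value:)
    @data = {key: key, type: type, value: value}
  end

  # `keys` is nil when the pattern requests everything; otherwise only
  # the requested keys are returned (mirroring the filter_map above).
  def deconstruct_keys(keys)
    (keys || @data.keys).filter_map do |k|
      [k, @data[k]] if @data.key?(k)
    end.to_h
  end
end

filter = ComparisonFilter.new(key: "size", type: :gt, value: 10)

# Hash patterns can now destructure the object:
case filter
in {key:, value:}
  puts("#{key} #{value}") # prints "size 10"
end
```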
