2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.10.0"
".": "0.11.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 109
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-ef4ecb19eb61e24c49d77fef769ee243e5279bc0bdbaee8d0f8dba4da8722559.yml
openapi_spec_hash: 1b8a9767c9f04e6865b06c41948cdc24
config_hash: fd2af1d5eff0995bb7dc02ac9a34851d
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-a473967d1766dc155994d932fbc4a5bcbd1c140a37c20d0a4065e1bf0640536d.yml
openapi_spec_hash: 67cdc62b0d6c8b1de29b7dc54b265749
config_hash: e74d6791681e3af1b548748ff47a22c2
20 changes: 20 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,25 @@
# Changelog

## 0.11.0 (2025-06-26)

Full Changelog: [v0.10.0...v0.11.0](https://github.com/openai/openai-ruby/compare/v0.10.0...v0.11.0)

### Features

* **api:** webhook and deep research support ([6228400](https://github.com/openai/openai-ruby/commit/6228400e19aadefc5f87e24b3c104fc0b44d3cee))


### Bug Fixes

* **ci:** release-doctor — report correct token name ([c12c991](https://github.com/openai/openai-ruby/commit/c12c9911beaeb8b1c72d7c5cc5f14dcb9cd5452e))


### Chores

* **api:** remove unsupported property ([1073c3a](https://github.com/openai/openai-ruby/commit/1073c3a6059f2d1e1ef92937326699e0240503e5))
* **client:** throw specific errors ([0cf937e](https://github.com/openai/openai-ruby/commit/0cf937ea8abebc05e52a419e19e275a45b5da646))
* **docs:** update README to include links to docs on Webhooks ([2d8f23e](https://github.com/openai/openai-ruby/commit/2d8f23ecb245c88f3f082f93eb906af857d64c7d))

## 0.10.0 (2025-06-23)

Full Changelog: [v0.9.0...v0.10.0](https://github.com/openai/openai-ruby/compare/v0.9.0...v0.10.0)
5 changes: 4 additions & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.10.0)
openai (0.11.0)
connection_pool

GEM
@@ -54,6 +54,7 @@ GEM
csv (3.3.4)
drb (2.2.1)
erubi (1.13.1)
ffi (1.17.2-arm64-darwin)
ffi (1.17.2-x86_64-linux-gnu)
fiber-annotation (0.2.0)
fiber-local (1.1.0)
@@ -124,6 +125,7 @@ GEM
sorbet (0.5.12067)
sorbet-static (= 0.5.12067)
sorbet-runtime (0.5.12067)
sorbet-static (0.5.12067-universal-darwin)
sorbet-static (0.5.12067-x86_64-linux)
sorbet-static-and-runtime (0.5.12067)
sorbet (= 0.5.12067)
@@ -185,6 +187,7 @@ GEM
yard

PLATFORMS
arm64-darwin-24
x86_64-linux

DEPENDENCIES
80 changes: 79 additions & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.10.0"
gem "openai", "~> 0.11.0"
```

<!-- x-release-please-end -->
@@ -112,6 +112,84 @@ puts(edited.data.first)

Note that you can also pass a raw `IO` descriptor, but this disables retries, as the library can't be sure if the descriptor is a file or pipe (which cannot be rewound).
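
As a sketch of the difference (using the `images.edit` call from the example above, which this hunk truncates), a `Pathname` can be reopened and rewound, so retries keep working, while a raw `IO` cannot:

```ruby
require "pathname"

# Rewindable: the library can safely retry the request on transient failures.
edited = client.images.edit(
  image: Pathname("path/to/image.png"),
  prompt: "make it look like a watercolor painting"
)

# A raw IO also works, but retries are disabled, since a pipe cannot be rewound.
File.open("path/to/image.png", "rb") do |io|
  edited = client.images.edit(image: io, prompt: "make it look like a watercolor painting")
end

puts(edited.data.first)
```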

## Webhook Verification

Verifying webhook signatures is _optional but encouraged_.

For more information about webhooks, see [the API docs](https://platform.openai.com/docs/guides/webhooks).

### Parsing webhook payloads

For most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap`, which parses a webhook request and verifies that it was sent by OpenAI. This method will raise an error if the signature is invalid.

Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `unwrap` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.

```ruby
require 'sinatra'
require 'openai'

# Set up the client with webhook secret from environment variable
client = OpenAI::Client.new(webhook_secret: ENV['OPENAI_WEBHOOK_SECRET'])

post '/webhook' do
request_body = request.body.read

begin
event = client.webhooks.unwrap(request_body, request.env)

case event.type
when 'response.completed'
puts "Response completed: #{event.data}"
when 'response.failed'
puts "Response failed: #{event.data}"
else
puts "Unhandled event type: #{event.type}"
end

status 200
'ok'
rescue StandardError => e
puts "Invalid signature: #{e}"
status 400
'Invalid signature'
end
end
```

### Verifying webhook payloads directly

In some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verify_signature` to _only verify_ the signature of a webhook request. Like `unwrap`, this method will raise an error if the signature is invalid.

Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.

```ruby
require 'sinatra'
require 'json'
require 'openai'

# Set up the client with webhook secret from environment variable
client = OpenAI::Client.new(webhook_secret: ENV['OPENAI_WEBHOOK_SECRET'])

post '/webhook' do
request_body = request.body.read

begin
client.webhooks.verify_signature(request_body, request.env)

# Parse the body after verification
event = JSON.parse(request_body)
puts "Verified event: #{event}"

status 200
'ok'
rescue StandardError => e
puts "Invalid signature: #{e}"
status 400
'Invalid signature'
end
end
```
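
This release also adds a dedicated `OpenAI::Errors::InvalidWebhookSignatureError` class (see `lib/openai/errors.rb` below), so the broad `rescue StandardError` in the examples above can likely be narrowed. A sketch, assuming both `unwrap` and `verify_signature` raise this error on a bad signature:

```ruby
begin
  client.webhooks.verify_signature(request_body, request.env)
rescue OpenAI::Errors::InvalidWebhookSignatureError => e
  # Only signature failures are handled here; unrelated errors still propagate.
  puts "Invalid signature: #{e}"
  status 400
  return 'Invalid signature'
end
```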

### [Structured outputs](https://platform.openai.com/docs/guides/structured-outputs) and function calling

This SDK ships with helpers in `OpenAI::BaseModel`, `OpenAI::ArrayOf`, `OpenAI::EnumOf`, and `OpenAI::UnionOf` to help you define the supported JSON schemas used in making structured outputs and function calling requests.
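
The hunk is cut off here, but a rough sketch of how these helpers might compose (the `required` attribute DSL and passing the class as `response_format:` are assumptions inferred from the helper names, not shown in this diff):

```ruby
class Location < OpenAI::BaseModel
  required :city, String
  required :temperature, Float
  required :units, OpenAI::EnumOf[:celsius, :fahrenheit]
end

# Assumed usage: the SDK derives a JSON schema from Location and parses the
# model's structured output back into a Location instance.
chat = client.chat.completions.create(
  model: "gpt-4o",
  messages: [{role: "user", content: "What's the weather in San Francisco?"}],
  response_format: Location
)
```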
2 changes: 1 addition & 1 deletion bin/check-release-environment
@@ -7,7 +7,7 @@ if [ -z "${STAINLESS_API_KEY}" ]; then
fi

if [ -z "${GEM_HOST_API_KEY}" ]; then
errors+=("The OPENAI_GEM_HOST_API_KEY secret has not been set. Please set it in either this repository's secrets or your organization secrets")
errors+=("The GEM_HOST_API_KEY secret has not been set. Please set it in either this repository's secrets or your organization secrets")
fi

lenErrors=${#errors[@]}
18 changes: 18 additions & 0 deletions lib/openai.rb
@@ -441,6 +441,7 @@
require_relative "openai/models/responses/response_web_search_call_searching_event"
require_relative "openai/models/responses/tool"
require_relative "openai/models/responses/tool_choice_function"
require_relative "openai/models/responses/tool_choice_mcp"
require_relative "openai/models/responses/tool_choice_options"
require_relative "openai/models/responses/tool_choice_types"
require_relative "openai/models/responses/web_search_tool"
@@ -477,6 +478,22 @@
require_relative "openai/models/vector_store_search_params"
require_relative "openai/models/vector_store_search_response"
require_relative "openai/models/vector_store_update_params"
require_relative "openai/models/webhooks/batch_cancelled_webhook_event"
require_relative "openai/models/webhooks/batch_completed_webhook_event"
require_relative "openai/models/webhooks/batch_expired_webhook_event"
require_relative "openai/models/webhooks/batch_failed_webhook_event"
require_relative "openai/models/webhooks/eval_run_canceled_webhook_event"
require_relative "openai/models/webhooks/eval_run_failed_webhook_event"
require_relative "openai/models/webhooks/eval_run_succeeded_webhook_event"
require_relative "openai/models/webhooks/fine_tuning_job_cancelled_webhook_event"
require_relative "openai/models/webhooks/fine_tuning_job_failed_webhook_event"
require_relative "openai/models/webhooks/fine_tuning_job_succeeded_webhook_event"
require_relative "openai/models/webhooks/response_cancelled_webhook_event"
require_relative "openai/models/webhooks/response_completed_webhook_event"
require_relative "openai/models/webhooks/response_failed_webhook_event"
require_relative "openai/models/webhooks/response_incomplete_webhook_event"
require_relative "openai/models/webhooks/unwrap_webhook_event"
require_relative "openai/models/webhooks/webhook_unwrap_params"
require_relative "openai/models"
require_relative "openai/resources/audio"
require_relative "openai/resources/audio/speech"
@@ -521,3 +538,4 @@
require_relative "openai/resources/vector_stores"
require_relative "openai/resources/vector_stores/file_batches"
require_relative "openai/resources/vector_stores/files"
require_relative "openai/resources/webhooks"
11 changes: 11 additions & 0 deletions lib/openai/client.rb
@@ -24,6 +24,9 @@ class Client < OpenAI::Internal::Transport::BaseClient
# @return [String, nil]
attr_reader :project

# @return [String, nil]
attr_reader :webhook_secret

# @return [OpenAI::Resources::Completions]
attr_reader :completions

@@ -57,6 +60,9 @@ class Client < OpenAI::Internal::Transport::BaseClient
# @return [OpenAI::Resources::VectorStores]
attr_reader :vector_stores

# @return [OpenAI::Resources::Webhooks]
attr_reader :webhooks

# @return [OpenAI::Resources::Beta]
attr_reader :beta

@@ -92,6 +98,8 @@
#
# @param project [String, nil] Defaults to `ENV["OPENAI_PROJECT_ID"]`
#
# @param webhook_secret [String, nil] Defaults to `ENV["OPENAI_WEBHOOK_SECRET"]`
#
# @param base_url [String, nil] Override the default base URL for the API, e.g.,
# `"https://api.example.com/v2/"`. Defaults to `ENV["OPENAI_BASE_URL"]`
#
@@ -106,6 +114,7 @@ def initialize(
api_key: ENV["OPENAI_API_KEY"],
organization: ENV["OPENAI_ORG_ID"],
project: ENV["OPENAI_PROJECT_ID"],
webhook_secret: ENV["OPENAI_WEBHOOK_SECRET"],
base_url: ENV["OPENAI_BASE_URL"],
max_retries: self.class::DEFAULT_MAX_RETRIES,
timeout: self.class::DEFAULT_TIMEOUT_IN_SECONDS,
@@ -124,6 +133,7 @@
}

@api_key = api_key.to_s
@webhook_secret = webhook_secret&.to_s

super(
base_url: base_url,
@@ -145,6 +155,7 @@
@fine_tuning = OpenAI::Resources::FineTuning.new(client: self)
@graders = OpenAI::Resources::Graders.new(client: self)
@vector_stores = OpenAI::Resources::VectorStores.new(client: self)
@webhooks = OpenAI::Resources::Webhooks.new(client: self)
@beta = OpenAI::Resources::Beta.new(client: self)
@batches = OpenAI::Resources::Batches.new(client: self)
@uploads = OpenAI::Resources::Uploads.new(client: self)
3 changes: 3 additions & 0 deletions lib/openai/errors.rb
@@ -8,6 +8,9 @@ class Error < StandardError
# @return [StandardError, nil]
end

class InvalidWebhookSignatureError < OpenAI::Errors::Error
end

class ConversionError < OpenAI::Errors::Error
# @return [StandardError, nil]
def cause = @cause.nil? ? super : @cause
2 changes: 2 additions & 0 deletions lib/openai/models.rb
@@ -234,4 +234,6 @@ module OpenAI
VectorStoreSearchParams = OpenAI::Models::VectorStoreSearchParams

VectorStoreUpdateParams = OpenAI::Models::VectorStoreUpdateParams

Webhooks = OpenAI::Models::Webhooks
end
4 changes: 4 additions & 0 deletions lib/openai/models/all_models.rb
@@ -18,6 +18,10 @@ module ResponsesOnlyModel
O1_PRO_2025_03_19 = :"o1-pro-2025-03-19"
O3_PRO = :"o3-pro"
O3_PRO_2025_06_10 = :"o3-pro-2025-06-10"
O3_DEEP_RESEARCH = :"o3-deep-research"
O3_DEEP_RESEARCH_2025_06_26 = :"o3-deep-research-2025-06-26"
O4_MINI_DEEP_RESEARCH = :"o4-mini-deep-research"
O4_MINI_DEEP_RESEARCH_2025_06_26 = :"o4-mini-deep-research-2025-06-26"
COMPUTER_USE_PREVIEW = :"computer-use-preview"
COMPUTER_USE_PREVIEW_2025_03_11 = :"computer-use-preview-2025-03-11"
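
Since these constants are added to the `ResponsesOnlyModel` enum, the new deep research models are presumably usable only through the Responses API. A hypothetical sketch of selecting one (the `input:` text is illustrative; real deep research requests may need extra configuration not shown in this diff):

```ruby
# Assumption: deep research models are requested like other Responses-only models.
response = client.responses.create(
  model: :"o3-deep-research",
  input: "Compile a short survey of recent work on solid-state batteries."
)
puts(response.id)
```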

63 changes: 32 additions & 31 deletions lib/openai/models/chat/chat_completion.rb
@@ -39,23 +39,23 @@ class ChatCompletion < OpenAI::Internal::Type::BaseModel
required :object, const: :"chat.completion"

# @!attribute service_tier
# Specifies the latency tier to use for processing the request. This parameter is
# relevant for customers subscribed to the scale tier service:
#
# - If set to 'auto', and the Project is Scale tier enabled, the system will
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
# Specifies the processing type used for serving the request.
#
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
# tier. [Contact sales](https://openai.com/contact-sales) to learn more about
# Priority processing.
# - When not set, the default behavior is 'auto'.
#
# When this parameter is set, the response body will include the `service_tier`
# utilized.
# When the `service_tier` parameter is set, the response body will include the
# `service_tier` value based on the processing mode actually used to serve the
# request. This response value may be different from the value set in the
# parameter.
#
# @return [Symbol, OpenAI::Models::Chat::ChatCompletion::ServiceTier, nil]
optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletion::ServiceTier }, nil?: true
@@ -90,7 +90,7 @@ class ChatCompletion < OpenAI::Internal::Type::BaseModel
#
# @param model [String] The model used for the chat completion.
#
# @param service_tier [Symbol, OpenAI::Models::Chat::ChatCompletion::ServiceTier, nil] Specifies the latency tier to use for processing the request. This parameter is
# @param service_tier [Symbol, OpenAI::Models::Chat::ChatCompletion::ServiceTier, nil] Specifies the processing type used for serving the request.
#
# @param system_fingerprint [String] This fingerprint represents the backend configuration that the model runs with.
#
@@ -188,23 +188,23 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
end
end

# Specifies the latency tier to use for processing the request. This parameter is
# relevant for customers subscribed to the scale tier service:
#
# - If set to 'auto', and the Project is Scale tier enabled, the system will
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
# Specifies the processing type used for serving the request.
#
# - If set to 'auto', then the request will be processed with the service tier
# configured in the Project settings. Unless otherwise configured, the Project
# will use 'default'.
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
# tier. [Contact sales](https://openai.com/contact-sales) to learn more about
# Priority processing.
# - When not set, the default behavior is 'auto'.
#
# When this parameter is set, the response body will include the `service_tier`
# utilized.
# When the `service_tier` parameter is set, the response body will include the
# `service_tier` value based on the processing mode actually used to serve the
# request. This response value may be different from the value set in the
# parameter.
#
# @see OpenAI::Models::Chat::ChatCompletion#service_tier
module ServiceTier
@@ -214,6 +214,7 @@
DEFAULT = :default
FLEX = :flex
SCALE = :scale
PRIORITY = :priority

# @!method self.values
# @return [Array<Symbol>]
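
With `PRIORITY` added to the enum, opting into priority processing would presumably look like the following (a sketch; per the doc comment above, the `service_tier` reported in the response may differ from the value requested):

```ruby
# Assumption: the request-side parameter accepts the same enum values.
chat = client.chat.completions.create(
  model: "gpt-4o",
  messages: [{role: "user", content: "Hello!"}],
  service_tier: :priority
)
# Reports the processing tier actually used to serve the request.
puts(chat.service_tier)
```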