Commit 6228400

feat(api): webhook and deep research support
Parent: c12c991

106 files changed (+6546, -509 lines)


.stats.yml

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
 configured_endpoints: 109
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-ef4ecb19eb61e24c49d77fef769ee243e5279bc0bdbaee8d0f8dba4da8722559.yml
-openapi_spec_hash: 1b8a9767c9f04e6865b06c41948cdc24
-config_hash: cae2d1f187b5b9f8dfa00daa807da42a
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-cca460eaf5cc13e9d6e5293eb97aac53d66dc1385c691f74b768c97d165b6e8b.yml
+openapi_spec_hash: 9ec43d443b3dd58ca5aa87eb0a7eb49f
+config_hash: e74d6791681e3af1b548748ff47a22c2

Gemfile.lock

Lines changed: 3 additions & 0 deletions
@@ -54,6 +54,7 @@ GEM
     csv (3.3.4)
     drb (2.2.1)
     erubi (1.13.1)
+    ffi (1.17.2-arm64-darwin)
     ffi (1.17.2-x86_64-linux-gnu)
     fiber-annotation (0.2.0)
     fiber-local (1.1.0)
@@ -124,6 +125,7 @@ GEM
     sorbet (0.5.12067)
       sorbet-static (= 0.5.12067)
     sorbet-runtime (0.5.12067)
+    sorbet-static (0.5.12067-universal-darwin)
     sorbet-static (0.5.12067-x86_64-linux)
     sorbet-static-and-runtime (0.5.12067)
       sorbet (= 0.5.12067)
@@ -185,6 +187,7 @@ GEM
       yard

 PLATFORMS
+  arm64-darwin-24
   x86_64-linux

 DEPENDENCIES

README.md

Lines changed: 76 additions & 0 deletions
@@ -112,6 +112,82 @@ puts(edited.data.first)
 
 Note that you can also pass a raw `IO` descriptor, but this disables retries, as the library can't be sure if the descriptor is a file or pipe (which cannot be rewound).
 
+## Webhook Verification
+
+Verifying webhook signatures is _optional but encouraged_.
+
+### Parsing webhook payloads
+
+For most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap`, which parses a webhook request and verifies that it was sent by OpenAI. This method will raise an error if the signature is invalid.
+
+Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `unwrap` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.
+
+```ruby
+require 'sinatra'
+require 'openai'
+
+# Set up the client with webhook secret from environment variable
+client = OpenAI::Client.new(webhook_secret: ENV['OPENAI_WEBHOOK_SECRET'])
+
+post '/webhook' do
+  request_body = request.body.read
+
+  begin
+    event = client.webhooks.unwrap(request_body, request.env)
+
+    case event.type
+    when 'response.completed'
+      puts "Response completed: #{event.data}"
+    when 'response.failed'
+      puts "Response failed: #{event.data}"
+    else
+      puts "Unhandled event type: #{event.type}"
+    end
+
+    status 200
+    'ok'
+  rescue StandardError => e
+    puts "Invalid signature: #{e}"
+    status 400
+    'Invalid signature'
+  end
+end
+```
+
+### Verifying webhook payloads directly
+
+In some cases, you may want to verify the webhook separately from parsing the payload. If you prefer to handle these steps separately, we provide the method `client.webhooks.verify_signature` to _only verify_ the signature of a webhook request. Like `unwrap`, this method will raise an error if the signature is invalid.
+
+Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.
+
+```ruby
+require 'sinatra'
+require 'json'
+require 'openai'
+
+# Set up the client with webhook secret from environment variable
+client = OpenAI::Client.new(webhook_secret: ENV['OPENAI_WEBHOOK_SECRET'])
+
+post '/webhook' do
+  request_body = request.body.read
+
+  begin
+    client.webhooks.verify_signature(request_body, request.env)
+
+    # Parse the body after verification
+    event = JSON.parse(request_body)
+    puts "Verified event: #{event}"
+
+    status 200
+    'ok'
+  rescue StandardError => e
+    puts "Invalid signature: #{e}"
+    status 400
+    'Invalid signature'
+  end
+end
+```
+
 ### [Structured outputs](https://platform.openai.com/docs/guides/structured-outputs) and function calling
 
 This SDK ships with helpers in `OpenAI::BaseModel`, `OpenAI::ArrayOf`, `OpenAI::EnumOf`, and `OpenAI::UnionOf` to help you define the supported JSON schemas used in making structured outputs and function calling requests.
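The README above leaves the cryptographic check to `client.webhooks.unwrap` / `verify_signature`. As a rough, standalone illustration of what such a check involves, here is an HMAC-SHA256 sketch in the style of the Standard Webhooks convention; the `whsec_` secret prefix, the `v1,` signature prefix, and the `"{id}.{timestamp}.{body}"` payload layout are assumptions for illustration, not details taken from this commit.

```ruby
require "openssl"
require "base64"

# Illustrative only: compute a webhook signature over "{id}.{timestamp}.{body}"
# with HMAC-SHA256. The "whsec_" and "v1," prefixes are assumed conventions.
def compute_signature(secret, webhook_id, timestamp, body)
  key = Base64.decode64(secret.delete_prefix("whsec_"))
  signed_payload = "#{webhook_id}.#{timestamp}.#{body}"
  "v1," + Base64.strict_encode64(OpenSSL::HMAC.digest("SHA256", key, signed_payload))
end

secret = "whsec_" + Base64.strict_encode64("local-test-key")
body = '{"type":"response.completed"}'
received = compute_signature(secret, "wh_123", "1750000000", body)

# A receiver recomputes the signature from the raw body and compares in
# constant time; any change to the body changes the signature.
expected = compute_signature(secret, "wh_123", "1750000000", body)
puts OpenSSL.secure_compare(received, expected)
```

This is also why the README insists on passing the raw JSON string: re-serializing a parsed payload can reorder keys or change whitespace, which changes the bytes and therefore the signature.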

lib/openai.rb

Lines changed: 18 additions & 0 deletions
@@ -441,6 +441,7 @@
 require_relative "openai/models/responses/response_web_search_call_searching_event"
 require_relative "openai/models/responses/tool"
 require_relative "openai/models/responses/tool_choice_function"
+require_relative "openai/models/responses/tool_choice_mcp"
 require_relative "openai/models/responses/tool_choice_options"
 require_relative "openai/models/responses/tool_choice_types"
 require_relative "openai/models/responses/web_search_tool"
@@ -477,6 +478,22 @@
 require_relative "openai/models/vector_store_search_params"
 require_relative "openai/models/vector_store_search_response"
 require_relative "openai/models/vector_store_update_params"
+require_relative "openai/models/webhooks/batch_cancelled_webhook_event"
+require_relative "openai/models/webhooks/batch_completed_webhook_event"
+require_relative "openai/models/webhooks/batch_expired_webhook_event"
+require_relative "openai/models/webhooks/batch_failed_webhook_event"
+require_relative "openai/models/webhooks/eval_run_canceled_webhook_event"
+require_relative "openai/models/webhooks/eval_run_failed_webhook_event"
+require_relative "openai/models/webhooks/eval_run_succeeded_webhook_event"
+require_relative "openai/models/webhooks/fine_tuning_job_cancelled_webhook_event"
+require_relative "openai/models/webhooks/fine_tuning_job_failed_webhook_event"
+require_relative "openai/models/webhooks/fine_tuning_job_succeeded_webhook_event"
+require_relative "openai/models/webhooks/response_cancelled_webhook_event"
+require_relative "openai/models/webhooks/response_completed_webhook_event"
+require_relative "openai/models/webhooks/response_failed_webhook_event"
+require_relative "openai/models/webhooks/response_incomplete_webhook_event"
+require_relative "openai/models/webhooks/unwrap_webhook_event"
+require_relative "openai/models/webhooks/webhook_unwrap_params"
 require_relative "openai/models"
 require_relative "openai/resources/audio"
 require_relative "openai/resources/audio/speech"
@@ -521,3 +538,4 @@
 require_relative "openai/resources/vector_stores"
 require_relative "openai/resources/vector_stores/file_batches"
 require_relative "openai/resources/vector_stores/files"
+require_relative "openai/resources/webhooks"
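The fourteen event models above fall into four families (batch, eval run, fine-tuning job, response). Assuming the event `type` strings mirror those file names (e.g. `batch.completed`, `response.failed` — an inference from the names, not something this diff confirms), a receiver can route on the family prefix:

```ruby
# Route a webhook event type string to a handler family. The type strings
# are inferred from the model file names above, not from the wire format.
def event_family(type)
  case type
  when /\Abatch\./            then :batch
  when /\Aeval\.run\./        then :eval_run
  when /\Afine_tuning\.job\./ then :fine_tuning_job
  when /\Aresponse\./         then :response
  else :unknown
  end
end

puts event_family("batch.completed")     # prints "batch"
puts event_family("response.incomplete") # prints "response"
puts event_family("something.else")      # prints "unknown"
```

Falling back to `:unknown` rather than raising keeps a handler tolerant of event types added after it was written.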

lib/openai/client.rb

Lines changed: 11 additions & 0 deletions
@@ -24,6 +24,9 @@ class Client < OpenAI::Internal::Transport::BaseClient
     # @return [String, nil]
     attr_reader :project
 
+    # @return [String, nil]
+    attr_reader :webhook_secret
+
     # @return [OpenAI::Resources::Completions]
     attr_reader :completions
 
@@ -57,6 +60,9 @@ class Client < OpenAI::Internal::Transport::BaseClient
     # @return [OpenAI::Resources::VectorStores]
     attr_reader :vector_stores
 
+    # @return [OpenAI::Resources::Webhooks]
+    attr_reader :webhooks
+
     # @return [OpenAI::Resources::Beta]
     attr_reader :beta
 
@@ -92,6 +98,8 @@ class Client < OpenAI::Internal::Transport::BaseClient
     #
     # @param project [String, nil] Defaults to `ENV["OPENAI_PROJECT_ID"]`
     #
+    # @param webhook_secret [String, nil] Defaults to `ENV["OPENAI_WEBHOOK_SECRET"]`
+    #
     # @param base_url [String, nil] Override the default base URL for the API, e.g.,
     # `"https://api.example.com/v2/"`. Defaults to `ENV["OPENAI_BASE_URL"]`
     #
@@ -106,6 +114,7 @@ def initialize(
       api_key: ENV["OPENAI_API_KEY"],
       organization: ENV["OPENAI_ORG_ID"],
       project: ENV["OPENAI_PROJECT_ID"],
+      webhook_secret: ENV["OPENAI_WEBHOOK_SECRET"],
      base_url: ENV["OPENAI_BASE_URL"],
      max_retries: self.class::DEFAULT_MAX_RETRIES,
      timeout: self.class::DEFAULT_TIMEOUT_IN_SECONDS,
@@ -124,6 +133,7 @@ def initialize(
       }
 
       @api_key = api_key.to_s
+      @webhook_secret = webhook_secret&.to_s
 
       super(
         base_url: base_url,
@@ -145,6 +155,7 @@ def initialize(
       @fine_tuning = OpenAI::Resources::FineTuning.new(client: self)
       @graders = OpenAI::Resources::Graders.new(client: self)
       @vector_stores = OpenAI::Resources::VectorStores.new(client: self)
+      @webhooks = OpenAI::Resources::Webhooks.new(client: self)
       @beta = OpenAI::Resources::Beta.new(client: self)
       @batches = OpenAI::Resources::Batches.new(client: self)
       @uploads = OpenAI::Resources::Uploads.new(client: self)
lib/openai/models.rb

Lines changed: 2 additions & 0 deletions
@@ -234,4 +234,6 @@ module OpenAI
   VectorStoreSearchParams = OpenAI::Models::VectorStoreSearchParams
 
   VectorStoreUpdateParams = OpenAI::Models::VectorStoreUpdateParams
+
+  Webhooks = OpenAI::Models::Webhooks
 end

lib/openai/models/all_models.rb

Lines changed: 4 additions & 0 deletions
@@ -18,6 +18,10 @@ module ResponsesOnlyModel
       O1_PRO_2025_03_19 = :"o1-pro-2025-03-19"
       O3_PRO = :"o3-pro"
       O3_PRO_2025_06_10 = :"o3-pro-2025-06-10"
+      O3_DEEP_RESEARCH = :"o3-deep-research"
+      O3_DEEP_RESEARCH_2025_06_26 = :"o3-deep-research-2025-06-26"
+      O4_MINI_DEEP_RESEARCH = :"o4-mini-deep-research"
+      O4_MINI_DEEP_RESEARCH_2025_06_26 = :"o4-mini-deep-research-2025-06-26"
       COMPUTER_USE_PREVIEW = :"computer-use-preview"
       COMPUTER_USE_PREVIEW_2025_03_11 = :"computer-use-preview-2025-03-11"
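As with the `o3-pro` pair above, each new deep research model ships as an undated alias plus a dated snapshot. A small sketch of pinning an alias to its snapshot — the mapping is read off the constants in this diff, while the `pin_model` helper itself is hypothetical, not SDK API:

```ruby
# Map undated deep research aliases to their dated snapshots, per the
# constants added above. pin_model is illustrative, not part of the SDK.
DEEP_RESEARCH_SNAPSHOTS = {
  "o3-deep-research"      => "o3-deep-research-2025-06-26",
  "o4-mini-deep-research" => "o4-mini-deep-research-2025-06-26"
}.freeze

def pin_model(name)
  DEEP_RESEARCH_SNAPSHOTS.fetch(name, name) # unknown names pass through
end

puts pin_model("o3-deep-research") # prints "o3-deep-research-2025-06-26"
puts pin_model("gpt-4o")           # prints "gpt-4o" (unchanged)
```

Pinning the dated snapshot in production avoids surprise behavior changes when the alias is repointed to a newer snapshot.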
lib/openai/models/chat/chat_completion.rb

Lines changed: 32 additions & 31 deletions
@@ -39,23 +39,23 @@ class ChatCompletion < OpenAI::Internal::Type::BaseModel
       required :object, const: :"chat.completion"
 
       # @!attribute service_tier
-      #   Specifies the latency tier to use for processing the request. This parameter is
-      #   relevant for customers subscribed to the scale tier service:
-      #
-      #   - If set to 'auto', and the Project is Scale tier enabled, the system will
-      #     utilize scale tier credits until they are exhausted.
-      #   - If set to 'auto', and the Project is not Scale tier enabled, the request will
-      #     be processed using the default service tier with a lower uptime SLA and no
-      #     latency guarantee.
-      #   - If set to 'default', the request will be processed using the default service
-      #     tier with a lower uptime SLA and no latency guarantee.
-      #   - If set to 'flex', the request will be processed with the Flex Processing
-      #     service tier.
-      #     [Learn more](https://platform.openai.com/docs/guides/flex-processing).
+      #   Specifies the processing type used for serving the request.
+      #
+      #   - If set to 'auto', then the request will be processed with the service tier
+      #     configured in the Project settings. Unless otherwise configured, the Project
+      #     will use 'default'.
+      #   - If set to 'default', then the requset will be processed with the standard
+      #     pricing and performance for the selected model.
+      #   - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
+      #     'priority', then the request will be processed with the corresponding service
+      #     tier. [Contact sales](https://openai.com/contact-sales) to learn more about
+      #     Priority processing.
       #   - When not set, the default behavior is 'auto'.
       #
-      #   When this parameter is set, the response body will include the `service_tier`
-      #   utilized.
+      #   When the `service_tier` parameter is set, the response body will include the
+      #   `service_tier` value based on the processing mode actually used to serve the
+      #   request. This response value may be different from the value set in the
+      #   parameter.
       #
       #   @return [Symbol, OpenAI::Models::Chat::ChatCompletion::ServiceTier, nil]
       optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletion::ServiceTier }, nil?: true
@@ -90,7 +90,7 @@ class ChatCompletion < OpenAI::Internal::Type::BaseModel
       #
       #   @param model [String] The model used for the chat completion.
       #
-      #   @param service_tier [Symbol, OpenAI::Models::Chat::ChatCompletion::ServiceTier, nil] Specifies the latency tier to use for processing the request. This parameter is
+      #   @param service_tier [Symbol, OpenAI::Models::Chat::ChatCompletion::ServiceTier, nil] Specifies the processing type used for serving the request.
       #
       #   @param system_fingerprint [String] This fingerprint represents the backend configuration that the model runs with.
       #
@@ -188,23 +188,23 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
         end
       end
 
-      # Specifies the latency tier to use for processing the request. This parameter is
-      # relevant for customers subscribed to the scale tier service:
-      #
-      # - If set to 'auto', and the Project is Scale tier enabled, the system will
-      #   utilize scale tier credits until they are exhausted.
-      # - If set to 'auto', and the Project is not Scale tier enabled, the request will
-      #   be processed using the default service tier with a lower uptime SLA and no
-      #   latency guarantee.
-      # - If set to 'default', the request will be processed using the default service
-      #   tier with a lower uptime SLA and no latency guarantee.
-      # - If set to 'flex', the request will be processed with the Flex Processing
-      #   service tier.
-      #   [Learn more](https://platform.openai.com/docs/guides/flex-processing).
+      # Specifies the processing type used for serving the request.
+      #
+      # - If set to 'auto', then the request will be processed with the service tier
+      #   configured in the Project settings. Unless otherwise configured, the Project
+      #   will use 'default'.
+      # - If set to 'default', then the requset will be processed with the standard
+      #   pricing and performance for the selected model.
+      # - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
+      #   'priority', then the request will be processed with the corresponding service
+      #   tier. [Contact sales](https://openai.com/contact-sales) to learn more about
+      #   Priority processing.
       # - When not set, the default behavior is 'auto'.
       #
-      # When this parameter is set, the response body will include the `service_tier`
-      # utilized.
+      # When the `service_tier` parameter is set, the response body will include the
+      # `service_tier` value based on the processing mode actually used to serve the
+      # request. This response value may be different from the value set in the
+      # parameter.
       #
       # @see OpenAI::Models::Chat::ChatCompletion#service_tier
       module ServiceTier
@@ -214,6 +214,7 @@ module ServiceTier
         DEFAULT = :default
         FLEX = :flex
         SCALE = :scale
+        PRIORITY = :priority
 
         # @!method self.values
         #   @return [Array<Symbol>]
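With `PRIORITY` added, a quick sketch of checking a candidate `service_tier` value against the enum. The symbol `:auto` is assumed from the doc comment above ("the default behavior is 'auto'"); the other four values are in the enum hunk, and the `service_tier?` helper is illustrative, not SDK API:

```ruby
# Accepted service tier values after this commit; :auto is assumed from the
# doc comment, the rest come from the ServiceTier enum hunk above.
SERVICE_TIERS = %i[auto default flex scale priority].freeze

# Illustrative validator: accepts symbols or strings, rejects nil/unknown.
def service_tier?(value)
  SERVICE_TIERS.include?(value&.to_sym)
end

puts service_tier?(:priority) # prints "true" (new in this commit)
puts service_tier?("flex")    # prints "true"
puts service_tier?(:turbo)    # prints "false"
```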
