README.md (76 lines changed: 76 additions & 0 deletions)
@@ -112,6 +112,82 @@ puts(edited.data.first)
Note that you can also pass a raw `IO` descriptor, but this disables retries, because the library cannot tell whether the descriptor is a file or a pipe (and a pipe cannot be rewound).
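As a minimal sketch of the difference (the `client.files.create` call, purpose value, and file names below are illustrative assumptions, not taken from this diff):

```ruby
require "pathname"
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# A Pathname can be reopened and re-read, so retries stay enabled.
client.files.create(file: Pathname.new("data/training.jsonl"), purpose: "fine-tune")

# A pipe is a raw IO that cannot be rewound, so retries are disabled.
client.files.create(file: IO.popen(["cat", "data/training.jsonl"]), purpose: "fine-tune")
```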
## Webhook Verification
Verifying webhook signatures is _optional but encouraged_.
### Parsing webhook payloads
For most use cases, you will likely want to verify the webhook and parse the payload at the same time. To achieve this, we provide the method `client.webhooks.unwrap`, which parses a webhook request and verifies that it was sent by OpenAI. This method will raise an error if the signature is invalid.
Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). The `unwrap` method will parse this JSON for you into an event object after verifying the webhook was sent from OpenAI.
```ruby
require "sinatra"
require "openai"

# Set up the client with webhook secret from environment variable
# (option and env var names shown here are assumptions)
client = OpenAI::Client.new(webhook_secret: ENV["OPENAI_WEBHOOK_SECRET"])

post "/webhook" do
  # Pass the raw, unparsed request body; the headers argument shape is assumed.
  event = client.webhooks.unwrap(request.body.read, request.env)
  puts("Received webhook event: #{event.inspect}")
  status 200
end
```
In some cases, you may want to verify the webhook separately from parsing the payload. For that, we provide the method `client.webhooks.verify_signature`, which _only verifies_ the signature of a webhook request. Like `unwrap`, this method will raise an error if the signature is invalid.
Note that the `body` parameter must be the raw JSON string sent from the server (do not parse it first). You will then need to parse the body after verifying the signature.
```ruby
require "sinatra"
require "json"
require "openai"

# Set up the client with webhook secret from environment variable
# (option and env var names shown here are assumptions)
client = OpenAI::Client.new(webhook_secret: ENV["OPENAI_WEBHOOK_SECRET"])

post "/webhook" do
  body = request.body.read
  # Raises if the signature is invalid; the headers argument shape is assumed.
  client.webhooks.verify_signature(body, request.env)
  # After verification, parse the raw body yourself.
  event = JSON.parse(body, symbolize_names: true)
  puts("Verified webhook event: #{event[:type]}")
  status 200
end
```
### [Structured outputs](https://platform.openai.com/docs/guides/structured-outputs) and function calling
This SDK ships with helpers in `OpenAI::BaseModel`, `OpenAI::ArrayOf`, `OpenAI::EnumOf`, and `OpenAI::UnionOf` to help you define the supported JSON schemas used in making structured outputs and function calling requests.
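As a rough sketch of how these helpers combine (the `required` field DSL and passing the class directly as `response_format` are assumptions for illustration, not shown in this diff):

```ruby
require "openai"

# Describe the JSON shape the model should produce.
class CalendarEvent < OpenAI::BaseModel
  required :name, String
  required :date, String
  required :attendees, OpenAI::ArrayOf[String]
  required :importance, OpenAI::EnumOf[:low, :medium, :high]
end

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Ask for a structured response that conforms to the schema above.
chat = client.chat.completions.create(
  model: "gpt-4o-2024-08-06",
  messages: [{role: "user", content: "Alice and Bob are meeting for lunch on Friday."}],
  response_format: CalendarEvent
)
puts(chat.choices.first.message.content)
```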
@@ -90,7 +90,7 @@ class ChatCompletion < OpenAI::Internal::Type::BaseModel
 #
 # @param model [String] The model used for the chat completion.
 #
-# @param service_tier [Symbol, OpenAI::Models::Chat::ChatCompletion::ServiceTier, nil] Specifies the latency tier to use for processing the request. This parameter is
+# @param service_tier [Symbol, OpenAI::Models::Chat::ChatCompletion::ServiceTier, nil] Specifies the processing type used for serving the request.
 #
 # @param system_fingerprint [String] This fingerprint represents the backend configuration that the model runs with.
 #
@@ -188,23 +188,23 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
     end
   end

-# Specifies the latency tier to use for processing the request. This parameter is
-# relevant for customers subscribed to the scale tier service:
-#
-# - If set to 'auto', and the Project is Scale tier enabled, the system will
-#   utilize scale tier credits until they are exhausted.
-# - If set to 'auto', and the Project is not Scale tier enabled, the request will
-#   be processed using the default service tier with a lower uptime SLA and no
-#   latency guarantee.
-# - If set to 'default', the request will be processed using the default service
-#   tier with a lower uptime SLA and no latency guarantee.
-# - If set to 'flex', the request will be processed with the Flex Processing
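For context, `service_tier` is set per request. Below is a hypothetical usage sketch; the call shape is assumed, and only the tier names 'auto', 'default', and 'flex' come from the comment text above:

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Explicitly choose a processing tier; :auto and :flex are the other
# values described in the comment block above.
chat = client.chat.completions.create(
  model: "gpt-4o",
  messages: [{role: "user", content: "Hello!"}],
  service_tier: :default
)
```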