2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "0.23.2"
+  ".": "0.23.3"
 }
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
 configured_endpoints: 118
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-94b1e3cb0bdc616ff0c2f267c33dadd95f133b1f64e647aab6c64afb292b2793.yml
-openapi_spec_hash: 2395319ac9befd59b6536ae7f9564a05
-config_hash: 930dac3aa861344867e4ac84f037b5df
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-d30ff992a48873c1466c49f3c01f2ec8933faebff23424748f8d056065b1bcef.yml
+openapi_spec_hash: e933ec43b46f45c348adb78840e5808d
+config_hash: bf45940f0a7805b4ec2017eecdd36893
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,13 @@
 # Changelog
 
+## 0.23.3 (2025-09-15)
+
+Full Changelog: [v0.23.2...v0.23.3](https://github.com/openai/openai-ruby/compare/v0.23.2...v0.23.3)
+
+### Chores
+
+* **api:** docs and spec refactoring ([81ccb86](https://github.com/openai/openai-ruby/commit/81ccb86c346e51a2b5d532a5997358aa86977572))
+
 ## 0.23.2 (2025-09-11)
 
 Full Changelog: [v0.23.1...v0.23.2](https://github.com/openai/openai-ruby/compare/v0.23.1...v0.23.2)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
 PATH
   remote: .
   specs:
-    openai (0.23.2)
+    openai (0.23.3)
       connection_pool
 
 GEM
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
 <!-- x-release-please-start-version -->
 
 ```ruby
-gem "openai", "~> 0.23.2"
+gem "openai", "~> 0.23.3"
 ```
 
 <!-- x-release-please-end -->
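Illustrative sketch (not part of the diff): after updating the pin and running `bundle install`, usage is unchanged. This assumes the gem's documented `OpenAI::Client` constructor and an `OPENAI_API_KEY` environment variable; `OpenAI::VERSION` is the constant bumped in `lib/openai/version.rb` below.

```ruby
# Verify the updated gem loads and a client can be built.
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])
puts OpenAI::VERSION # expected to print "0.23.3" after this release
```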
9 changes: 6 additions & 3 deletions lib/openai/models/chat/completion_list_params.rb
@@ -21,9 +21,12 @@ class CompletionListParams < OpenAI::Internal::Type::BaseModel
 optional :limit, Integer
 
 # @!attribute metadata
-# A list of metadata keys to filter the Chat Completions by. Example:
+# Set of 16 key-value pairs that can be attached to an object. This can be useful
+# for storing additional information about the object in a structured format, and
+# querying for objects via API or the dashboard.
 #
-# `metadata[key1]=value1&metadata[key2]=value2`
+# Keys are strings with a maximum length of 64 characters. Values are strings with
+# a maximum length of 512 characters.
 #
 # @return [Hash{Symbol=>String}, nil]
 optional :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
@@ -49,7 +52,7 @@ class CompletionListParams < OpenAI::Internal::Type::BaseModel
 #
 # @param limit [Integer] Number of Chat Completions to retrieve.
 #
-# @param metadata [Hash{Symbol=>String}, nil] A list of metadata keys to filter the Chat Completions by. Example:
+# @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
 #
 # @param model [String] The model used to generate the Chat Completions.
 #
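Illustrative sketch (not part of the diff): the reworded `metadata` doc now uses the shared metadata description, and the parameter still filters stored Chat Completions when listing them. This assumes the generated `client.chat.completions.list` method and a `.data` accessor on the returned page; the metadata pair is hypothetical.

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Filter stored Chat Completions by metadata (up to 16 pairs per object;
# keys up to 64 chars, values up to 512 chars).
page = client.chat.completions.list(
  metadata: {environment: "production"}, # hypothetical key/value
  limit: 10
)

page.data.each { |completion| puts completion.id }
```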
10 changes: 7 additions & 3 deletions lib/openai/models/conversations/conversation_create_params.rb
@@ -18,8 +18,12 @@ class ConversationCreateParams < OpenAI::Internal::Type::BaseModel
 nil?: true
 
 # @!attribute metadata
-# Set of 16 key-value pairs that can be attached to an object. Useful for storing
-# additional information about the object in a structured format.
+# Set of 16 key-value pairs that can be attached to an object. This can be useful
+# for storing additional information about the object in a structured format, and
+# querying for objects via API or the dashboard.
+#
+# Keys are strings with a maximum length of 64 characters. Values are strings with
+# a maximum length of 512 characters.
 #
 # @return [Hash{Symbol=>String}, nil]
 optional :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
@@ -30,7 +34,7 @@ class ConversationCreateParams < OpenAI::Internal::Type::BaseModel
 #
 # @param items [Array<OpenAI::Models::Responses::EasyInputMessage, OpenAI::Models::Responses::ResponseInputItem::Message, OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseInputItem::ComputerCallOutput, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseInputItem::FunctionCallOutput, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseInputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCallOutput, OpenAI::Models::Responses::ResponseInputItem::McpListTools, OpenAI::Models::Responses::ResponseInputItem::McpApprovalRequest, OpenAI::Models::Responses::ResponseInputItem::McpApprovalResponse, OpenAI::Models::Responses::ResponseInputItem::McpCall, OpenAI::Models::Responses::ResponseCustomToolCallOutput, OpenAI::Models::Responses::ResponseCustomToolCall, OpenAI::Models::Responses::ResponseInputItem::ItemReference>, nil] Initial items to include in the conversation context.
 #
-# @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. Useful for
+# @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
 #
 # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}]
 end
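Illustrative sketch (not part of the diff): attaching metadata when creating a conversation, via the `client.conversations.create` resource shown later in this diff. The item hash follows the Responses `EasyInputMessage` shape; all names and values are hypothetical.

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Up to 16 key-value pairs; keys up to 64 chars, values up to 512 chars.
conversation = client.conversations.create(
  items: [{type: "message", role: "user", content: "Hello!"}],
  metadata: {user_id: "u_123", plan: "free"} # hypothetical pairs
)

puts conversation.id
```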
9 changes: 6 additions & 3 deletions lib/openai/models/evals/run_cancel_response.rb
@@ -314,8 +314,11 @@ class Responses < OpenAI::Internal::Type::BaseModel
 optional :model, String, nil?: true
 
 # @!attribute reasoning_effort
-# Optional reasoning effort parameter. This is a query parameter used to select
-# responses.
+# Constrains effort on reasoning for
+# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+# effort can result in faster responses and fewer tokens used on reasoning in a
+# response.
 #
 # @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
 optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -361,7 +364,7 @@ class Responses < OpenAI::Internal::Type::BaseModel
 #
 # @param model [String, nil] The name of the model to find responses for. This is a query parameter used to s
 #
-# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Optional reasoning effort parameter. This is a query parameter used to select re
+# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
 #
 # @param temperature [Float, nil] Sampling temperature. This is a query parameter used to select responses.
 #
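Illustrative sketch (not part of the diff; the same docstring change repeats in the four evals files below): in these evals params, `reasoning_effort` selects responses by the effort they were generated with, and the accepted values mirror the request-side parameter. This assumes `client.chat.completions.create` accepts `reasoning_effort:` and that `o4-mini` stands in for any reasoning-capable model.

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# reasoning_effort is one of :minimal, :low, :medium, :high.
# Lower effort generally means faster responses and fewer reasoning tokens.
completion = client.chat.completions.create(
  model: "o4-mini", # assumed reasoning-capable model
  reasoning_effort: :low,
  messages: [{role: "user", content: "Explain reasoning effort in one sentence."}]
)

puts completion.choices.first.message.content
```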
9 changes: 6 additions & 3 deletions lib/openai/models/evals/run_create_params.rb
@@ -226,8 +226,11 @@ class Responses < OpenAI::Internal::Type::BaseModel
 optional :model, String, nil?: true
 
 # @!attribute reasoning_effort
-# Optional reasoning effort parameter. This is a query parameter used to select
-# responses.
+# Constrains effort on reasoning for
+# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+# effort can result in faster responses and fewer tokens used on reasoning in a
+# response.
 #
 # @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
 optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -273,7 +276,7 @@ class Responses < OpenAI::Internal::Type::BaseModel
 #
 # @param model [String, nil] The name of the model to find responses for. This is a query parameter used to s
 #
-# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Optional reasoning effort parameter. This is a query parameter used to select re
+# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
 #
 # @param temperature [Float, nil] Sampling temperature. This is a query parameter used to select responses.
 #
9 changes: 6 additions & 3 deletions lib/openai/models/evals/run_create_response.rb
@@ -314,8 +314,11 @@ class Responses < OpenAI::Internal::Type::BaseModel
 optional :model, String, nil?: true
 
 # @!attribute reasoning_effort
-# Optional reasoning effort parameter. This is a query parameter used to select
-# responses.
+# Constrains effort on reasoning for
+# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+# effort can result in faster responses and fewer tokens used on reasoning in a
+# response.
 #
 # @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
 optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -361,7 +364,7 @@ class Responses < OpenAI::Internal::Type::BaseModel
 #
 # @param model [String, nil] The name of the model to find responses for. This is a query parameter used to s
 #
-# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Optional reasoning effort parameter. This is a query parameter used to select re
+# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
 #
 # @param temperature [Float, nil] Sampling temperature. This is a query parameter used to select responses.
 #
9 changes: 6 additions & 3 deletions lib/openai/models/evals/run_list_response.rb
@@ -314,8 +314,11 @@ class Responses < OpenAI::Internal::Type::BaseModel
 optional :model, String, nil?: true
 
 # @!attribute reasoning_effort
-# Optional reasoning effort parameter. This is a query parameter used to select
-# responses.
+# Constrains effort on reasoning for
+# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+# effort can result in faster responses and fewer tokens used on reasoning in a
+# response.
 #
 # @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
 optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -361,7 +364,7 @@ class Responses < OpenAI::Internal::Type::BaseModel
 #
 # @param model [String, nil] The name of the model to find responses for. This is a query parameter used to s
 #
-# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Optional reasoning effort parameter. This is a query parameter used to select re
+# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
 #
 # @param temperature [Float, nil] Sampling temperature. This is a query parameter used to select responses.
 #
9 changes: 6 additions & 3 deletions lib/openai/models/evals/run_retrieve_response.rb
@@ -314,8 +314,11 @@ class Responses < OpenAI::Internal::Type::BaseModel
 optional :model, String, nil?: true
 
 # @!attribute reasoning_effort
-# Optional reasoning effort parameter. This is a query parameter used to select
-# responses.
+# Constrains effort on reasoning for
+# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+# effort can result in faster responses and fewer tokens used on reasoning in a
+# response.
 #
 # @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
 optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
@@ -361,7 +364,7 @@ class Responses < OpenAI::Internal::Type::BaseModel
 #
 # @param model [String, nil] The name of the model to find responses for. This is a query parameter used to s
 #
-# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Optional reasoning effort parameter. This is a query parameter used to select re
+# @param reasoning_effort [Symbol, OpenAI::Models::ReasoningEffort, nil] Constrains effort on reasoning for
 #
 # @param temperature [Float, nil] Sampling temperature. This is a query parameter used to select responses.
 #
2 changes: 1 addition & 1 deletion lib/openai/resources/chat/completions.rb
@@ -387,7 +387,7 @@ def update(completion_id, params)
 #
 # @param limit [Integer] Number of Chat Completions to retrieve.
 #
-# @param metadata [Hash{Symbol=>String}, nil] A list of metadata keys to filter the Chat Completions by. Example:
+# @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
 #
 # @param model [String] The model used to generate the Chat Completions.
 #
2 changes: 1 addition & 1 deletion lib/openai/resources/conversations.rb
@@ -15,7 +15,7 @@ class Conversations
 #
 # @param items [Array<OpenAI::Models::Responses::EasyInputMessage, OpenAI::Models::Responses::ResponseInputItem::Message, OpenAI::Models::Responses::ResponseOutputMessage, OpenAI::Models::Responses::ResponseFileSearchToolCall, OpenAI::Models::Responses::ResponseComputerToolCall, OpenAI::Models::Responses::ResponseInputItem::ComputerCallOutput, OpenAI::Models::Responses::ResponseFunctionWebSearch, OpenAI::Models::Responses::ResponseFunctionToolCall, OpenAI::Models::Responses::ResponseInputItem::FunctionCallOutput, OpenAI::Models::Responses::ResponseReasoningItem, OpenAI::Models::Responses::ResponseInputItem::ImageGenerationCall, OpenAI::Models::Responses::ResponseCodeInterpreterToolCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCall, OpenAI::Models::Responses::ResponseInputItem::LocalShellCallOutput, OpenAI::Models::Responses::ResponseInputItem::McpListTools, OpenAI::Models::Responses::ResponseInputItem::McpApprovalRequest, OpenAI::Models::Responses::ResponseInputItem::McpApprovalResponse, OpenAI::Models::Responses::ResponseInputItem::McpCall, OpenAI::Models::Responses::ResponseCustomToolCallOutput, OpenAI::Models::Responses::ResponseCustomToolCall, OpenAI::Models::Responses::ResponseInputItem::ItemReference>, nil] Initial items to include in the conversation context.
 #
-# @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. Useful for
+# @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
 #
 # @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}, nil]
 #
2 changes: 1 addition & 1 deletion lib/openai/version.rb
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module OpenAI
-  VERSION = "0.23.2"
+  VERSION = "0.23.3"
 end
14 changes: 10 additions & 4 deletions rbi/openai/models/chat/completion_list_params.rbi
@@ -26,9 +26,12 @@ module OpenAI
 sig { params(limit: Integer).void }
 attr_writer :limit
 
-# A list of metadata keys to filter the Chat Completions by. Example:
+# Set of 16 key-value pairs that can be attached to an object. This can be useful
+# for storing additional information about the object in a structured format, and
+# querying for objects via API or the dashboard.
 #
-# `metadata[key1]=value1&metadata[key2]=value2`
+# Keys are strings with a maximum length of 64 characters. Values are strings with
+# a maximum length of 512 characters.
 sig { returns(T.nilable(T::Hash[Symbol, String])) }
 attr_accessor :metadata
 
@@ -70,9 +73,12 @@ module OpenAI
 after: nil,
 # Number of Chat Completions to retrieve.
 limit: nil,
-# A list of metadata keys to filter the Chat Completions by. Example:
+# Set of 16 key-value pairs that can be attached to an object. This can be useful
+# for storing additional information about the object in a structured format, and
+# querying for objects via API or the dashboard.
 #
-# `metadata[key1]=value1&metadata[key2]=value2`
+# Keys are strings with a maximum length of 64 characters. Values are strings with
+# a maximum length of 512 characters.
 metadata: nil,
 # The model used to generate the Chat Completions.
 model: nil,
16 changes: 12 additions & 4 deletions rbi/openai/models/conversations/conversation_create_params.rbi
@@ -50,8 +50,12 @@ module OpenAI
 end
 attr_accessor :items
 
-# Set of 16 key-value pairs that can be attached to an object. Useful for storing
-# additional information about the object in a structured format.
+# Set of 16 key-value pairs that can be attached to an object. This can be useful
+# for storing additional information about the object in a structured format, and
+# querying for objects via API or the dashboard.
+#
+# Keys are strings with a maximum length of 64 characters. Values are strings with
+# a maximum length of 512 characters.
 sig { returns(T.nilable(T::Hash[Symbol, String])) }
 attr_accessor :metadata
 
@@ -93,8 +97,12 @@ module OpenAI
 # Initial items to include in the conversation context. You may add up to 20 items
 # at a time.
 items: nil,
-# Set of 16 key-value pairs that can be attached to an object. Useful for storing
-# additional information about the object in a structured format.
+# Set of 16 key-value pairs that can be attached to an object. This can be useful
+# for storing additional information about the object in a structured format, and
+# querying for objects via API or the dashboard.
+#
+# Keys are strings with a maximum length of 64 characters. Values are strings with
+# a maximum length of 512 characters.
 metadata: nil,
 request_options: {}
 )
14 changes: 10 additions & 4 deletions rbi/openai/models/evals/run_cancel_response.rbi
@@ -510,8 +510,11 @@ module OpenAI
 sig { returns(T.nilable(String)) }
 attr_accessor :model
 
-# Optional reasoning effort parameter. This is a query parameter used to select
-# responses.
+# Constrains effort on reasoning for
+# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+# effort can result in faster responses and fewer tokens used on reasoning in a
+# response.
 sig do
 returns(T.nilable(OpenAI::ReasoningEffort::TaggedSymbol))
 end
@@ -566,8 +569,11 @@ module OpenAI
 # The name of the model to find responses for. This is a query parameter used to
 # select responses.
 model: nil,
-# Optional reasoning effort parameter. This is a query parameter used to select
-# responses.
+# Constrains effort on reasoning for
+# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
+# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
+# effort can result in faster responses and fewer tokens used on reasoning in a
+# response.
 reasoning_effort: nil,
 # Sampling temperature. This is a query parameter used to select responses.
 temperature: nil,