2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.30.0"
".": "0.31.0"
}
4 changes: 4 additions & 0 deletions .rubocop.yml
@@ -121,6 +121,10 @@ Metrics/BlockLength:
Metrics/ClassLength:
Enabled: false

Metrics/CollectionLiteralLength:
Exclude:
- "test/**/*"

Metrics/CyclomaticComplexity:
Enabled: false

6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 135
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-d64cf80d2ebddf175c5578f68226a3d5bbd3f7fd8d62ccac2205f3fc05a355ee.yml
openapi_spec_hash: d51e0d60d0c536f210b597a211bc5af0
config_hash: e7c42016df9c6bd7bd6ff15101b9bc9b
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-e66e85fb7f72477256dca1acb6b23396989d381c5c1b318de564195436bcb93f.yml
openapi_spec_hash: 0a4bbb5aa0ae532a072bd6b3854e70b1
config_hash: 89bf7bb3a1f9439ffc6ea0e7dc57ba9b
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,19 @@
# Changelog

## 0.31.0 (2025-10-10)

Full Changelog: [v0.30.0...v0.31.0](https://github.com/openai/openai-ruby/compare/v0.30.0...v0.31.0)

### Features

* **api:** comparison filter in/not in ([ac3e58b](https://github.com/openai/openai-ruby/commit/ac3e58bbee0c919ac84c4b3ac8b67955bca7ba88))


### Chores

* ignore linter error for tests having large collections ([90c4440](https://github.com/openai/openai-ruby/commit/90c44400f8713b7d2d0b51142f4ed5509dbca713))
* simplify model references ([d18c5af](https://github.com/openai/openai-ruby/commit/d18c5af9d05ae63616f2c83fb228c15f37cdddb0))

## 0.30.0 (2025-10-06)

Full Changelog: [v0.29.0...v0.30.0](https://github.com/openai/openai-ruby/compare/v0.29.0...v0.30.0)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.30.0)
openai (0.31.0)
connection_pool

GEM
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.30.0"
gem "openai", "~> 0.31.0"
```

<!-- x-release-please-end -->
3 changes: 3 additions & 0 deletions lib/openai/models/beta/assistant_create_params.rb
@@ -55,6 +55,9 @@ class AssistantCreateParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

3 changes: 3 additions & 0 deletions lib/openai/models/beta/assistant_update_params.rb
@@ -55,6 +55,9 @@ class AssistantUpdateParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

3 changes: 3 additions & 0 deletions lib/openai/models/beta/threads/run_create_params.rb
@@ -113,6 +113,9 @@ class RunCreateParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

3 changes: 3 additions & 0 deletions lib/openai/models/chat/completion_create_params.rb
@@ -197,6 +197,9 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

35 changes: 29 additions & 6 deletions lib/openai/models/comparison_filter.rb
@@ -10,14 +10,17 @@ class ComparisonFilter < OpenAI::Internal::Type::BaseModel
required :key, String

# @!attribute type
# Specifies the comparison operator: `eq`, `ne`, `gt`, `gte`, `lt`, `lte`.
# Specifies the comparison operator: `eq`, `ne`, `gt`, `gte`, `lt`, `lte`, `in`,
# `nin`.
#
# - `eq`: equals
# - `ne`: not equal
# - `gt`: greater than
# - `gte`: greater than or equal
# - `lt`: less than
# - `lte`: less than or equal
# - `in`: in
# - `nin`: not in
#
# @return [Symbol, OpenAI::Models::ComparisonFilter::Type]
required :type, enum: -> { OpenAI::ComparisonFilter::Type }
@@ -26,7 +29,7 @@ class ComparisonFilter < OpenAI::Internal::Type::BaseModel
# The value to compare against the attribute key; supports string, number, or
# boolean types.
#
# @return [String, Float, Boolean]
# @return [String, Float, Boolean, Array<String, Float>]
required :value, union: -> { OpenAI::ComparisonFilter::Value }

# @!method initialize(key:, type:, value:)
@@ -38,18 +41,21 @@ class ComparisonFilter < OpenAI::Internal::Type::BaseModel
#
# @param key [String] The key to compare against the value.
#
# @param type [Symbol, OpenAI::Models::ComparisonFilter::Type] Specifies the comparison operator: `eq`, `ne`, `gt`, `gte`, `lt`, `lte`.
# @param type [Symbol, OpenAI::Models::ComparisonFilter::Type] Specifies the comparison operator: `eq`, `ne`, `gt`, `gte`, `lt`, `lte`, `in`, `
#
# @param value [String, Float, Boolean] The value to compare against the attribute key; supports string, number, or bool
# @param value [String, Float, Boolean, Array<String, Float>] The value to compare against the attribute key; supports string, number, or bool

# Specifies the comparison operator: `eq`, `ne`, `gt`, `gte`, `lt`, `lte`.
# Specifies the comparison operator: `eq`, `ne`, `gt`, `gte`, `lt`, `lte`, `in`,
# `nin`.
#
# - `eq`: equals
# - `ne`: not equal
# - `gt`: greater than
# - `gte`: greater than or equal
# - `lt`: less than
# - `lte`: less than or equal
# - `in`: in
# - `nin`: not in
#
# @see OpenAI::Models::ComparisonFilter#type
module Type
@@ -79,8 +85,25 @@ module Value

variant OpenAI::Internal::Type::Boolean

variant -> { OpenAI::Models::ComparisonFilter::Value::UnionMember3Array }

module UnionMember3
extend OpenAI::Internal::Type::Union

variant String

variant Float

# @!method self.variants
# @return [Array(String, Float)]
end

# @!method self.variants
# @return [Array(String, Float, Boolean)]
# @return [Array(String, Float, Boolean, Array<String, Float>)]

# @type [OpenAI::Internal::Type::Converter]
UnionMember3Array =
OpenAI::Internal::Type::ArrayOf[union: -> { OpenAI::ComparisonFilter::Value::UnionMember3 }]
end
end
end
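A brief usage sketch of the new `in` operator (not part of the diff): the filter constructor matches the `initialize(key:, type:, value:)` signature shown above, while the `vector_stores.search` call site, the vector store ID, and the query are assumptions about where such filters are typically passed.

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Match files whose "region" attribute is one of the listed values.
# The :in operator and the array value are the additions in this release;
# :nin works the same way but excludes the listed values.
region_filter = OpenAI::ComparisonFilter.new(
  key: "region",
  type: :in,
  value: ["us-east", "eu-west"]
)

# Assumed call site: vector store search is one endpoint that accepts
# attribute filters. "vs_123" and the query are placeholders.
results = client.vector_stores.search(
  "vs_123",
  query: "quarterly revenue",
  filters: region_filter
)
```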
@@ -466,6 +466,9 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

6 changes: 6 additions & 0 deletions lib/openai/models/evals/run_cancel_response.rb
@@ -320,6 +320,9 @@ class Responses < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

@@ -661,6 +664,9 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

6 changes: 6 additions & 0 deletions lib/openai/models/evals/run_create_params.rb
@@ -232,6 +232,9 @@ class Responses < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

@@ -589,6 +592,9 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

6 changes: 6 additions & 0 deletions lib/openai/models/evals/run_create_response.rb
@@ -320,6 +320,9 @@ class Responses < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

@@ -661,6 +664,9 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

6 changes: 6 additions & 0 deletions lib/openai/models/evals/run_list_response.rb
@@ -320,6 +320,9 @@ class Responses < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

@@ -661,6 +664,9 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

6 changes: 6 additions & 0 deletions lib/openai/models/evals/run_retrieve_response.rb
@@ -320,6 +320,9 @@ class Responses < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

@@ -665,6 +668,9 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

3 changes: 3 additions & 0 deletions lib/openai/models/graders/score_model_grader.rb
@@ -226,6 +226,9 @@ class SamplingParams < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

3 changes: 3 additions & 0 deletions lib/openai/models/reasoning.rb
@@ -10,6 +10,9 @@ class Reasoning < OpenAI::Internal::Type::BaseModel
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true

3 changes: 3 additions & 0 deletions lib/openai/models/reasoning_effort.rb
@@ -7,6 +7,9 @@ module Models
# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
module ReasoningEffort
extend OpenAI::Internal::Type::Enum

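As a quick illustration of the `gpt-5-pro` note that recurs throughout this PR (a sketch, not part of the diff): requests targeting that model should pass `:high` explicitly or omit the field and rely on the default. The message content is a placeholder; `reasoning_effort` on chat completions is documented in `completion_create_params.rb` above.

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# gpt-5-pro defaults to (and only supports) high reasoning effort, so pass
# :high or omit reasoning_effort entirely; other values are not supported
# for this model per the note above.
completion = client.chat.completions.create(
  model: "gpt-5-pro",
  messages: [{role: :user, content: "Summarize the attached report."}],
  reasoning_effort: :high
)

puts completion.choices.first.message.content
```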
6 changes: 3 additions & 3 deletions lib/openai/models/vector_stores/vector_store_file.rb
@@ -101,7 +101,7 @@ class VectorStoreFile < OpenAI::Internal::Type::BaseModel
# @see OpenAI::Models::VectorStores::VectorStoreFile#last_error
class LastError < OpenAI::Internal::Type::BaseModel
# @!attribute code
# One of `server_error` or `rate_limit_exceeded`.
# One of `server_error`, `unsupported_file`, or `invalid_file`.
#
# @return [Symbol, OpenAI::Models::VectorStores::VectorStoreFile::LastError::Code]
required :code, enum: -> { OpenAI::VectorStores::VectorStoreFile::LastError::Code }
@@ -116,11 +116,11 @@ class LastError < OpenAI::Internal::Type::BaseModel
# The last error associated with this vector store file. Will be `null` if there
# are no errors.
#
# @param code [Symbol, OpenAI::Models::VectorStores::VectorStoreFile::LastError::Code] One of `server_error` or `rate_limit_exceeded`.
# @param code [Symbol, OpenAI::Models::VectorStores::VectorStoreFile::LastError::Code] One of `server_error`, `unsupported_file`, or `invalid_file`.
#
# @param message [String] A human-readable description of the error.

# One of `server_error` or `rate_limit_exceeded`.
# One of `server_error`, `unsupported_file`, or `invalid_file`.
#
# @see OpenAI::Models::VectorStores::VectorStoreFile::LastError#code
module Code
2 changes: 1 addition & 1 deletion lib/openai/resources/files.rb
@@ -105,7 +105,7 @@ def list(params = {})
)
end

# Delete a file.
# Delete a file and remove it from all vector stores.
#
# @overload delete(file_id, request_options: {})
#
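For completeness, a small sketch (not part of the diff) of the updated `files.delete` behavior: a single call now also detaches the file from any vector stores that reference it. The file ID is a placeholder, and the `deleted` accessor is assumed to follow the standard `FileDeleted` response model.

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Deleting the file also removes it from every vector store that references it.
deleted = client.files.delete("file-abc123")
puts deleted.deleted # => true when the deletion succeeded
```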
2 changes: 1 addition & 1 deletion lib/openai/version.rb
@@ -1,5 +1,5 @@
# frozen_string_literal: true

module OpenAI
VERSION = "0.30.0"
VERSION = "0.31.0"
end
6 changes: 6 additions & 0 deletions rbi/openai/models/beta/assistant_create_params.rbi
@@ -50,6 +50,9 @@ module OpenAI
# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
sig { returns(T.nilable(OpenAI::ReasoningEffort::OrSymbol)) }
attr_accessor :reasoning_effort

@@ -212,6 +215,9 @@ module OpenAI
# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
reasoning_effort: nil,
# Specifies the format that the model must output. Compatible with
# [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),