Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.18.0"
".": "0.18.1"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 109
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-24be531010b354303d741fc9247c1f84f75978f9f7de68aca92cb4f240a04722.yml
openapi_spec_hash: 3e46f439f6a863beadc71577eb4efa15
config_hash: ed87b9139ac595a04a2162d754df2fed
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-7ef7a457c3bf05364e66e48c9ca34f31bfef1f6c9b7c15b1812346105e0abb16.yml
openapi_spec_hash: a2b1f5d8fbb62175c93b0ebea9f10063
config_hash: 76afa3236f36854a8705f1281b1990b8
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,13 @@
# Changelog

## 0.18.1 (2025-08-19)

Full Changelog: [v0.18.0...v0.18.1](https://github.com/openai/openai-ruby/compare/v0.18.0...v0.18.1)

### Chores

* **api:** accurately represent shape for verbosity on Chat Completions ([a19cd00](https://github.com/openai/openai-ruby/commit/a19cd00e6df3cc3f47239a25fe15f33c2cb77962))

## 0.18.0 (2025-08-15)

Full Changelog: [v0.17.1...v0.18.0](https://github.com/openai/openai-ruby/compare/v0.17.1...v0.18.0)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.18.0)
openai (0.18.1)
connection_pool

GEM
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.18.0"
gem "openai", "~> 0.18.1"
```

<!-- x-release-please-end -->
43 changes: 2 additions & 41 deletions lib/openai/models/chat/completion_create_params.rb
@@ -272,7 +272,7 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
# our [model distillation](https://platform.openai.com/docs/guides/distillation)
# or [evals](https://platform.openai.com/docs/guides/evals) products.
#
# Supports text and image inputs. Note: image inputs over 10MB will be dropped.
# Supports text and image inputs. Note: image inputs over 8MB will be dropped.
#
# @return [Boolean, nil]
optional :store, OpenAI::Internal::Type::Boolean, nil?: true
@@ -292,11 +292,6 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
# @return [Float, nil]
optional :temperature, Float, nil?: true

# @!attribute text
#
# @return [OpenAI::Models::Chat::CompletionCreateParams::Text, nil]
optional :text, -> { OpenAI::Chat::CompletionCreateParams::Text }

# @!attribute tool_choice
# Controls which (if any) tool is called by the model. `none` means the model will
# not call any tool and instead generates a message. `auto` means the model can
@@ -370,7 +365,7 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
# @return [OpenAI::Models::Chat::CompletionCreateParams::WebSearchOptions, nil]
optional :web_search_options, -> { OpenAI::Chat::CompletionCreateParams::WebSearchOptions }

# @!method initialize(messages:, model:, audio: nil, frequency_penalty: nil, function_call: nil, functions: nil, logit_bias: nil, logprobs: nil, max_completion_tokens: nil, max_tokens: nil, metadata: nil, modalities: nil, n: nil, parallel_tool_calls: nil, prediction: nil, presence_penalty: nil, prompt_cache_key: nil, reasoning_effort: nil, response_format: nil, safety_identifier: nil, seed: nil, service_tier: nil, stop: nil, store: nil, stream_options: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, user: nil, verbosity: nil, web_search_options: nil, request_options: {})
# @!method initialize(messages:, model:, audio: nil, frequency_penalty: nil, function_call: nil, functions: nil, logit_bias: nil, logprobs: nil, max_completion_tokens: nil, max_tokens: nil, metadata: nil, modalities: nil, n: nil, parallel_tool_calls: nil, prediction: nil, presence_penalty: nil, prompt_cache_key: nil, reasoning_effort: nil, response_format: nil, safety_identifier: nil, seed: nil, service_tier: nil, stop: nil, store: nil, stream_options: nil, temperature: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, user: nil, verbosity: nil, web_search_options: nil, request_options: {})
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Chat::CompletionCreateParams} for more details.
#
@@ -426,8 +421,6 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
#
# @param temperature [Float, nil] What sampling temperature to use, between 0 and 2. Higher values like 0.8 will m
#
# @param text [OpenAI::Models::Chat::CompletionCreateParams::Text]
#
# @param tool_choice [Symbol, OpenAI::Models::Chat::ChatCompletionToolChoiceOption::Auto, OpenAI::Models::Chat::ChatCompletionAllowedToolChoice, OpenAI::Models::Chat::ChatCompletionNamedToolChoice, OpenAI::Models::Chat::ChatCompletionNamedToolChoiceCustom] Controls which (if any) tool is called by the model.
#
# @param tools [Array<OpenAI::StructuredOutput::JsonSchemaConverter, OpenAI::Models::Chat::ChatCompletionFunctionTool, OpenAI::Models::Chat::ChatCompletionCustomTool>] A list of tools the model may call. You can provide either
@@ -638,38 +631,6 @@ module Stop
StringArray = OpenAI::Internal::Type::ArrayOf[String]
end

class Text < OpenAI::Internal::Type::BaseModel
# @!attribute verbosity
# Constrains the verbosity of the model's response. Lower values will result in
# more concise responses, while higher values will result in more verbose
# responses. Currently supported values are `low`, `medium`, and `high`.
#
# @return [Symbol, OpenAI::Models::Chat::CompletionCreateParams::Text::Verbosity, nil]
optional :verbosity, enum: -> { OpenAI::Chat::CompletionCreateParams::Text::Verbosity }, nil?: true

# @!method initialize(verbosity: nil)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Chat::CompletionCreateParams::Text} for more details.
#
# @param verbosity [Symbol, OpenAI::Models::Chat::CompletionCreateParams::Text::Verbosity, nil] Constrains the verbosity of the model's response. Lower values will result in

# Constrains the verbosity of the model's response. Lower values will result in
# more concise responses, while higher values will result in more verbose
# responses. Currently supported values are `low`, `medium`, and `high`.
#
# @see OpenAI::Models::Chat::CompletionCreateParams::Text#verbosity
module Verbosity
extend OpenAI::Internal::Type::Enum

LOW = :low
MEDIUM = :medium
HIGH = :high

# @!method self.values
# @return [Array<Symbol>]
end
end

# Constrains the verbosity of the model's response. Lower values will result in
# more concise responses, while higher values will result in more verbose
# responses. Currently supported values are `low`, `medium`, and `high`.
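The hunks above delete the nested `Text` wrapper from `CompletionCreateParams`: `verbosity` is now accepted at the top level of Chat Completions params, matching the API's actual shape. A minimal sketch of the corrected request shape — the model name and message content are illustrative assumptions, not taken from this diff:

```ruby
# Hypothetical Chat Completions request illustrating the corrected shape:
# `verbosity` sits at the top level, not inside a `text` object.
params = {
  model: "gpt-5", # assumed model name
  messages: [{ role: "user", content: "Summarize the diff." }],
  verbosity: :low # previously (incorrectly) modeled as text: { verbosity: :low }
}

puts params.key?(:text)  # => false
puts params[:verbosity]  # => low
```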
11 changes: 6 additions & 5 deletions lib/openai/models/graders/text_similarity_grader.rb
@@ -5,8 +5,8 @@ module Models
module Graders
class TextSimilarityGrader < OpenAI::Internal::Type::BaseModel
# @!attribute evaluation_metric
# The evaluation metric to use. One of `fuzzy_match`, `bleu`, `gleu`, `meteor`,
# `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or `rouge_l`.
# The evaluation metric to use. One of `cosine`, `fuzzy_match`, `bleu`, `gleu`,
# `meteor`, `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or `rouge_l`.
#
# @return [Symbol, OpenAI::Models::Graders::TextSimilarityGrader::EvaluationMetric]
required :evaluation_metric, enum: -> { OpenAI::Graders::TextSimilarityGrader::EvaluationMetric }
@@ -41,7 +41,7 @@ class TextSimilarityGrader < OpenAI::Internal::Type::BaseModel
#
# A TextSimilarityGrader object which grades text based on similarity metrics.
#
# @param evaluation_metric [Symbol, OpenAI::Models::Graders::TextSimilarityGrader::EvaluationMetric] The evaluation metric to use. One of `fuzzy_match`, `bleu`, `gleu`, `meteor`, `r
# @param evaluation_metric [Symbol, OpenAI::Models::Graders::TextSimilarityGrader::EvaluationMetric] The evaluation metric to use. One of `cosine`, `fuzzy_match`, `bleu`,
#
# @param input [String] The text being graded.
#
Expand All @@ -51,13 +51,14 @@ class TextSimilarityGrader < OpenAI::Internal::Type::BaseModel
#
# @param type [Symbol, :text_similarity] The type of grader.

# The evaluation metric to use. One of `fuzzy_match`, `bleu`, `gleu`, `meteor`,
# `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or `rouge_l`.
# The evaluation metric to use. One of `cosine`, `fuzzy_match`, `bleu`, `gleu`,
# `meteor`, `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or `rouge_l`.
#
# @see OpenAI::Models::Graders::TextSimilarityGrader#evaluation_metric
module EvaluationMetric
extend OpenAI::Internal::Type::Enum

COSINE = :cosine
FUZZY_MATCH = :fuzzy_match
BLEU = :bleu
GLEU = :gleu
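The grader change above adds `cosine` to the supported `evaluation_metric` values. A sketch of a grader configuration using it — the grader name, template strings, and the `reference` field are illustrative assumptions; only `evaluation_metric`, `input`, and `type` are visible in the diff above:

```ruby
# Hypothetical text-similarity grader configuration using the newly
# supported :cosine metric; field values are illustrative.
grader = {
  type: :text_similarity,
  name: "semantic_match",                # assumed grader label
  evaluation_metric: :cosine,            # new in 0.18.1
  input: "{{sample.output_text}}",       # assumed template string
  reference: "{{item.expected_answer}}"  # assumed field and template
}

puts grader[:evaluation_metric]  # => cosine
```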
64 changes: 8 additions & 56 deletions lib/openai/models/responses/response.rb
@@ -229,9 +229,14 @@ class Response < OpenAI::Internal::Type::BaseModel
optional :status, enum: -> { OpenAI::Responses::ResponseStatus }

# @!attribute text
# Configuration options for a text response from the model. Can be plain text or
# structured JSON data. Learn more:
#
# @return [OpenAI::Models::Responses::Response::Text, nil]
optional :text, -> { OpenAI::Responses::Response::Text }
# - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
# - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
#
# @return [OpenAI::Models::Responses::ResponseTextConfig, nil]
optional :text, -> { OpenAI::Responses::ResponseTextConfig }

# @!attribute top_logprobs
# An integer between 0 and 20 specifying the number of most likely tokens to
@@ -341,7 +346,7 @@ def output_text
#
# @param status [Symbol, OpenAI::Models::Responses::ResponseStatus] The status of the response generation. One of `completed`, `failed`,
#
# @param text [OpenAI::Models::Responses::Response::Text]
# @param text [OpenAI::Models::Responses::ResponseTextConfig] Configuration options for a text response from the model. Can be plain
#
# @param top_logprobs [Integer, nil] An integer between 0 and 20 specifying the number of most likely tokens to
#
@@ -475,59 +480,6 @@ module ServiceTier
# @return [Array<Symbol>]
end

# @see OpenAI::Models::Responses::Response#text
class Text < OpenAI::Internal::Type::BaseModel
# @!attribute format_
# An object specifying the format that the model must output.
#
# Configuring `{ "type": "json_schema" }` enables Structured Outputs, which
# ensures the model will match your supplied JSON schema. Learn more in the
# [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
#
# The default format is `{ "type": "text" }` with no additional options.
#
# **Not recommended for gpt-4o and newer models:**
#
# Setting to `{ "type": "json_object" }` enables the older JSON mode, which
# ensures the message the model generates is valid JSON. Using `json_schema` is
# preferred for models that support it.
#
# @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject, nil]
optional :format_, union: -> { OpenAI::Responses::ResponseFormatTextConfig }, api_name: :format

# @!attribute verbosity
# Constrains the verbosity of the model's response. Lower values will result in
# more concise responses, while higher values will result in more verbose
# responses. Currently supported values are `low`, `medium`, and `high`.
#
# @return [Symbol, OpenAI::Models::Responses::Response::Text::Verbosity, nil]
optional :verbosity, enum: -> { OpenAI::Responses::Response::Text::Verbosity }, nil?: true

# @!method initialize(format_: nil, verbosity: nil)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Responses::Response::Text} for more details.
#
# @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
#
# @param verbosity [Symbol, OpenAI::Models::Responses::Response::Text::Verbosity, nil] Constrains the verbosity of the model's response. Lower values will result in

# Constrains the verbosity of the model's response. Lower values will result in
# more concise responses, while higher values will result in more verbose
# responses. Currently supported values are `low`, `medium`, and `high`.
#
# @see OpenAI::Models::Responses::Response::Text#verbosity
module Verbosity
extend OpenAI::Internal::Type::Enum

LOW = :low
MEDIUM = :medium
HIGH = :high

# @!method self.values
# @return [Array<Symbol>]
end
end

# The truncation strategy to use for the model response.
#
# - `auto`: If the context of this response and previous ones exceeds the model's
60 changes: 3 additions & 57 deletions lib/openai/models/responses/response_create_params.rb
@@ -193,6 +193,8 @@ class ResponseCreateParams < OpenAI::Internal::Type::BaseModel
optional :temperature, Float, nil?: true

# @!attribute text
# Configuration options for a text response from the model. Can be plain text or
# structured JSON data. Learn more:
#
# - [Text inputs and outputs](https://platform.openai.com/docs/guides/text)
# - [Structured Outputs](https://platform.openai.com/docs/guides/structured-outputs)
@@ -316,7 +318,7 @@ class ResponseCreateParams < OpenAI::Internal::Type::BaseModel
#
# @param temperature [Float, nil] What sampling temperature to use, between 0 and 2. Higher values like 0.8 will m
#
# @param text [OpenAI::Models::Responses::ResponseCreateParams::Text]
# @param text [OpenAI::Models::Responses::ResponseTextConfig] Configuration options for a text response from the model. Can be plain
#
# @param tool_choice [Symbol, OpenAI::Models::Responses::ToolChoiceOptions, OpenAI::Models::Responses::ToolChoiceAllowed, OpenAI::Models::Responses::ToolChoiceTypes, OpenAI::Models::Responses::ToolChoiceFunction, OpenAI::Models::Responses::ToolChoiceMcp, OpenAI::Models::Responses::ToolChoiceCustom] How the model should select which tool (or tools) to use when generating
#
@@ -407,62 +409,6 @@ class StreamOptions < OpenAI::Internal::Type::BaseModel
# @param include_obfuscation [Boolean] When true, stream obfuscation will be enabled. Stream obfuscation adds
end

class Text < OpenAI::Internal::Type::BaseModel
# @!attribute format_
# An object specifying the format that the model must output.
#
# Configuring `{ "type": "json_schema" }` enables Structured Outputs, which
# ensures the model will match your supplied JSON schema. Learn more in the
# [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
#
# The default format is `{ "type": "text" }` with no additional options.
#
# **Not recommended for gpt-4o and newer models:**
#
# Setting to `{ "type": "json_object" }` enables the older JSON mode, which
# ensures the message the model generates is valid JSON. Using `json_schema` is
# preferred for models that support it.
#
# @return [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject, nil]
optional :format_, union: -> { OpenAI::Responses::ResponseFormatTextConfig }, api_name: :format

# @!attribute verbosity
# Constrains the verbosity of the model's response. Lower values will result in
# more concise responses, while higher values will result in more verbose
# responses. Currently supported values are `low`, `medium`, and `high`.
#
# @return [Symbol, OpenAI::Models::Responses::ResponseCreateParams::Text::Verbosity, nil]
optional :verbosity,
enum: -> {
OpenAI::Responses::ResponseCreateParams::Text::Verbosity
},
nil?: true

# @!method initialize(format_: nil, verbosity: nil)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Responses::ResponseCreateParams::Text} for more details.
#
# @param format_ [OpenAI::Models::ResponseFormatText, OpenAI::Models::Responses::ResponseFormatTextJSONSchemaConfig, OpenAI::Models::ResponseFormatJSONObject] An object specifying the format that the model must output.
#
# @param verbosity [Symbol, OpenAI::Models::Responses::ResponseCreateParams::Text::Verbosity, nil] Constrains the verbosity of the model's response. Lower values will result in

# Constrains the verbosity of the model's response. Lower values will result in
# more concise responses, while higher values will result in more verbose
# responses. Currently supported values are `low`, `medium`, and `high`.
#
# @see OpenAI::Models::Responses::ResponseCreateParams::Text#verbosity
module Verbosity
extend OpenAI::Internal::Type::Enum

LOW = :low
MEDIUM = :medium
HIGH = :high

# @!method self.values
# @return [Array<Symbol>]
end
end

# How the model should select which tool (or tools) to use when generating a
# response. See the `tools` parameter to see how to specify which tools the model
# can call.
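With the duplicated private `Text` classes deleted, both `Response#text` and `ResponseCreateParams#text` now share the single `OpenAI::Models::Responses::ResponseTextConfig` type. A sketch of the configuration shape that type models — the specific values are illustrative assumptions; `verbosity` accepts `:low`, `:medium`, or `:high` per the enum above:

```ruby
# Hypothetical `text` configuration for a Responses API call, matching
# the shape now typed as ResponseTextConfig on both request and response.
text_config = {
  format: { type: :text },  # or a :json_schema format for Structured Outputs
  verbosity: :medium        # :low, :medium, or :high
}

puts text_config[:verbosity]  # => medium
```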