2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.27.1"
".": "0.28.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 118
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-410219ea680089f02bb55163c673919703f946c3d6ad7ff5d6f607121d5287d5.yml
openapi_spec_hash: 2b3eee95d3f6796c7a61dfddf694a59a
config_hash: 666d6bb4b564f0d9d431124b5d1a0665
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-49233088b5e73dbb96bf7af27be3d4547632e3db1c2b00f14184900613325bbc.yml
openapi_spec_hash: b34f14b141d5019244112901c5c7c2d8
config_hash: 94e9ba08201c3d1ca46e093e6a0138fa
17 changes: 17 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,22 @@
# Changelog

## 0.28.0 (2025-09-30)

Full Changelog: [v0.27.1...v0.28.0](https://github.com/openai/openai-ruby/compare/v0.27.1...v0.28.0)

### ⚠ BREAKING CHANGES

* **api:** `ResponseFunctionToolCallOutputItem.output` and `ResponseCustomToolCallOutput.output` now return `string | Array<ResponseInputText | ResponseInputImage | ResponseInputFile>` instead of `string` only. This may break existing callsites that assume `output` is always a string.
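A callsite that previously assumed `output` is a `String` can branch on the runtime type instead. The sketch below is plain Ruby with stand-in structs, not SDK API — `output_text`, `TextPart`, and `ImagePart` are all illustrative names:

```ruby
# Illustrative helper (not part of the SDK): normalize the new
# `String | Array` union back to displayable text.
def output_text(output)
  case output
  when String
    output
  when Array
    # Text parts respond to #text; image/file parts are skipped here.
    output.filter_map { |part| part.text if part.respond_to?(:text) }.join(" ")
  end
end

TextPart = Struct.new(:text)        # stand-in for ResponseInputText
ImagePart = Struct.new(:image_url)  # stand-in for ResponseInputImage

output_text("plain result")
# => "plain result"
output_text([TextPart.new("a"), ImagePart.new("img"), TextPart.new("b")])
# => "a b"
```

Code that pattern-matches like this keeps working for responses that still return plain strings while handling the new content-list form.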

### Features

* **api:** Support images and files for function call outputs in responses, BatchUsage ([904348a](https://github.com/openai/openai-ruby/commit/904348a26c713601f10063fef73f9982088aa438))


### Bug Fixes

* coroutine leaks from connection pool ([7f0b3cd](https://github.com/openai/openai-ruby/commit/7f0b3cdfee0232dbfa1800029ba80f5470f95c13))

## 0.27.1 (2025-09-29)

Full Changelog: [v0.27.0...v0.27.1](https://github.com/openai/openai-ruby/compare/v0.27.0...v0.27.1)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.27.1)
openai (0.28.0)
connection_pool

GEM
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.27.1"
gem "openai", "~> 0.28.0"
```

<!-- x-release-please-end -->
6 changes: 6 additions & 0 deletions lib/openai.rb
@@ -99,6 +99,7 @@
require_relative "openai/models/batch_list_params"
require_relative "openai/models/batch_request_counts"
require_relative "openai/models/batch_retrieve_params"
require_relative "openai/models/batch_usage"
require_relative "openai/models/beta/assistant"
require_relative "openai/models/beta/assistant_create_params"
require_relative "openai/models/beta/assistant_deleted"
@@ -537,6 +538,8 @@
require_relative "openai/models/responses/response_format_text_json_schema_config"
require_relative "openai/models/responses/response_function_call_arguments_delta_event"
require_relative "openai/models/responses/response_function_call_arguments_done_event"
require_relative "openai/models/responses/response_function_call_output_item"
require_relative "openai/models/responses/response_function_call_output_item_list"
require_relative "openai/models/responses/response_function_tool_call_item"
require_relative "openai/models/responses/response_function_tool_call_output_item"
require_relative "openai/models/responses/response_function_web_search"
@@ -550,9 +553,12 @@
require_relative "openai/models/responses/response_input"
require_relative "openai/models/responses/response_input_audio"
require_relative "openai/models/responses/response_input_content"
require_relative "openai/models/responses/response_input_file_content"
require_relative "openai/models/responses/response_input_image_content"
require_relative "openai/models/responses/response_input_item"
require_relative "openai/models/responses/response_input_message_content_list"
require_relative "openai/models/responses/response_input_message_item"
require_relative "openai/models/responses/response_input_text_content"
require_relative "openai/models/responses/response_item"
require_relative "openai/models/responses/response_item_list"
require_relative "openai/models/responses/response_mcp_call_arguments_delta_event"
17 changes: 7 additions & 10 deletions lib/openai/internal/transport/pooled_net_requester.rb
@@ -134,9 +134,9 @@ def execute(request)

# rubocop:disable Metrics/BlockLength
enum = Enumerator.new do |y|
with_pool(url, deadline: deadline) do |conn|
next if finished
next if finished

with_pool(url, deadline: deadline) do |conn|
req, closing = self.class.build_request(request) do
self.class.calibrate_socket_timeout(conn, deadline)
end
@@ -149,7 +149,7 @@

self.class.calibrate_socket_timeout(conn, deadline)
conn.request(req) do |rsp|
y << [conn, req, rsp]
y << [req, rsp]
break if finished

rsp.read_body do |bytes|
@@ -160,6 +160,8 @@
end
eof = true
end
ensure
conn.finish if !eof && conn&.started?
end
rescue Timeout::Error
raise OpenAI::Errors::APITimeoutError.new(url: url, request: req)
@@ -168,16 +170,11 @@
end
# rubocop:enable Metrics/BlockLength

conn, _, response = enum.next
_, response = enum.next
body = OpenAI::Internal::Util.fused_enum(enum, external: true) do
finished = true
tap do
enum.next
rescue StopIteration
nil
end
loop { enum.next }
ensure
conn.finish if !eof && conn&.started?
closing&.call
end
[Integer(response.code), response, body]
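The diff above fixes the coroutine leak by checking `finished` before checking out a pooled connection and by closing the connection in an `ensure` inside the enumerator body, so cleanup runs on the fiber that owns the connection rather than in the fused enumerator's external cleanup. A minimal, SDK-free sketch of that pattern:

```ruby
# Sketch (plain Ruby, no Net::HTTP): an Enumerator body runs on its own
# fiber, so an `ensure` inside the block releases the resource when the
# enumerator is drained, mirroring `conn.finish if !eof && conn&.started?`.
closed = []
enum = Enumerator.new do |y|
  conn = "conn-1" # stands in for a pooled connection checkout
  y << [conn, "response"]
  y << [conn, "chunk"]
ensure
  closed << conn # runs on the enumerator's fiber once iteration completes
end

enum.next          # first yield, like `_, response = enum.next`
loop { enum.next } # drain the rest; StopIteration ends the loop
closed # => ["conn-1"]
```

This is also why the cleanup above was simplified to `loop { enum.next }`: draining to `StopIteration` is what triggers the body's `ensure`.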
17 changes: 0 additions & 17 deletions lib/openai/internal/type/base_stream.rb
@@ -13,21 +13,6 @@ module Type
module BaseStream
include Enumerable

class << self
# Attempt to close the underlying transport when the stream itself is garbage
# collected.
#
# This should not be relied upon for resource clean up, as the garbage collector
# is not guaranteed to run.
#
# @param stream [Enumerable<Object>]
#
# @return [Proc]
#
# @see https://rubyapi.org/3.2/o/objectspace#method-c-define_finalizer
def defer_closing(stream) = ->(_id) { OpenAI::Internal::Util.close_fused!(stream) }
end

# @return [Integer]
attr_reader :status

@@ -82,8 +67,6 @@ def initialize(model:, url:, status:, headers:, response:, unwrap:, stream:)
@unwrap = unwrap
@stream = stream
@iterator = iterator

ObjectSpace.define_finalizer(self, OpenAI::Internal::Type::BaseStream.defer_closing(@stream))
end

# @api private
2 changes: 2 additions & 0 deletions lib/openai/models.rb
@@ -63,6 +63,8 @@ module OpenAI

BatchRetrieveParams = OpenAI::Models::BatchRetrieveParams

BatchUsage = OpenAI::Models::BatchUsage

Beta = OpenAI::Models::Beta

Chat = OpenAI::Models::Chat
24 changes: 23 additions & 1 deletion lib/openai/models/batch.rb
@@ -115,6 +115,16 @@ class Batch < OpenAI::Internal::Type::BaseModel
# @return [Hash{Symbol=>String}, nil]
optional :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true

# @!attribute model
# Model ID used to process the batch, like `gpt-5-2025-08-07`. OpenAI offers a
# wide range of models with different capabilities, performance characteristics,
# and price points. Refer to the
# [model guide](https://platform.openai.com/docs/models) to browse and compare
# available models.
#
# @return [String, nil]
optional :model, String

# @!attribute output_file_id
# The ID of the file containing the outputs of successfully executed requests.
#
@@ -127,7 +137,15 @@
# @return [OpenAI::Models::BatchRequestCounts, nil]
optional :request_counts, -> { OpenAI::BatchRequestCounts }

# @!method initialize(id:, completion_window:, created_at:, endpoint:, input_file_id:, status:, cancelled_at: nil, cancelling_at: nil, completed_at: nil, error_file_id: nil, errors: nil, expired_at: nil, expires_at: nil, failed_at: nil, finalizing_at: nil, in_progress_at: nil, metadata: nil, output_file_id: nil, request_counts: nil, object: :batch)
# @!attribute usage
# Represents token usage details including input tokens, output tokens, a
# breakdown of output tokens, and the total tokens used. Only populated on batches
# created after September 7, 2025.
#
# @return [OpenAI::Models::BatchUsage, nil]
optional :usage, -> { OpenAI::BatchUsage }

# @!method initialize(id:, completion_window:, created_at:, endpoint:, input_file_id:, status:, cancelled_at: nil, cancelling_at: nil, completed_at: nil, error_file_id: nil, errors: nil, expired_at: nil, expires_at: nil, failed_at: nil, finalizing_at: nil, in_progress_at: nil, metadata: nil, model: nil, output_file_id: nil, request_counts: nil, usage: nil, object: :batch)
# Some parameter documentation has been truncated, see {OpenAI::Models::Batch}
# for more details.
#
@@ -165,10 +183,14 @@ class Batch < OpenAI::Internal::Type::BaseModel
#
# @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
#
# @param model [String] Model ID used to process the batch, like `gpt-5-2025-08-07`. OpenAI
#
# @param output_file_id [String] The ID of the file containing the outputs of successfully executed requests.
#
# @param request_counts [OpenAI::Models::BatchRequestCounts] The request counts for different statuses within the batch.
#
# @param usage [OpenAI::Models::BatchUsage] Represents token usage details including input tokens, output tokens, a
#
# @param object [Symbol, :batch] The object type, which is always `batch`.

# The current status of the batch.
84 changes: 84 additions & 0 deletions lib/openai/models/batch_usage.rb
@@ -0,0 +1,84 @@
# frozen_string_literal: true

module OpenAI
module Models
class BatchUsage < OpenAI::Internal::Type::BaseModel
# @!attribute input_tokens
# The number of input tokens.
#
# @return [Integer]
required :input_tokens, Integer

# @!attribute input_tokens_details
# A detailed breakdown of the input tokens.
#
# @return [OpenAI::Models::BatchUsage::InputTokensDetails]
required :input_tokens_details, -> { OpenAI::BatchUsage::InputTokensDetails }

# @!attribute output_tokens
# The number of output tokens.
#
# @return [Integer]
required :output_tokens, Integer

# @!attribute output_tokens_details
# A detailed breakdown of the output tokens.
#
# @return [OpenAI::Models::BatchUsage::OutputTokensDetails]
required :output_tokens_details, -> { OpenAI::BatchUsage::OutputTokensDetails }

# @!attribute total_tokens
# The total number of tokens used.
#
# @return [Integer]
required :total_tokens, Integer

# @!method initialize(input_tokens:, input_tokens_details:, output_tokens:, output_tokens_details:, total_tokens:)
# Represents token usage details including input tokens, output tokens, a
# breakdown of output tokens, and the total tokens used. Only populated on batches
# created after September 7, 2025.
#
# @param input_tokens [Integer] The number of input tokens.
#
# @param input_tokens_details [OpenAI::Models::BatchUsage::InputTokensDetails] A detailed breakdown of the input tokens.
#
# @param output_tokens [Integer] The number of output tokens.
#
# @param output_tokens_details [OpenAI::Models::BatchUsage::OutputTokensDetails] A detailed breakdown of the output tokens.
#
# @param total_tokens [Integer] The total number of tokens used.

# @see OpenAI::Models::BatchUsage#input_tokens_details
class InputTokensDetails < OpenAI::Internal::Type::BaseModel
# @!attribute cached_tokens
# The number of tokens that were retrieved from the cache.
# [More on prompt caching](https://platform.openai.com/docs/guides/prompt-caching).
#
# @return [Integer]
required :cached_tokens, Integer

# @!method initialize(cached_tokens:)
# Some parameter documentation has been truncated, see
# {OpenAI::Models::BatchUsage::InputTokensDetails} for more details.
#
# A detailed breakdown of the input tokens.
#
# @param cached_tokens [Integer] The number of tokens that were retrieved from the cache. [More on
end

# @see OpenAI::Models::BatchUsage#output_tokens_details
class OutputTokensDetails < OpenAI::Internal::Type::BaseModel
# @!attribute reasoning_tokens
# The number of reasoning tokens.
#
# @return [Integer]
required :reasoning_tokens, Integer

# @!method initialize(reasoning_tokens:)
# A detailed breakdown of the output tokens.
#
# @param reasoning_tokens [Integer] The number of reasoning tokens.
end
end
end
end
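The new `usage` field (populated only for batches created after September 7, 2025) makes a few aggregate checks possible. The hash below is illustrative sample data mirroring `BatchUsage`'s shape, not a real API payload:

```ruby
# Illustrative usage payload with the same keys as BatchUsage.
usage = {
  input_tokens: 1_200,
  input_tokens_details: { cached_tokens: 200 },
  output_tokens: 300,
  output_tokens_details: { reasoning_tokens: 50 },
  total_tokens: 1_500
}

# Input tokens that were not served from the prompt cache.
uncached_input = usage[:input_tokens] - usage[:input_tokens_details][:cached_tokens]
# => 1000

# Sanity check: total equals input plus output.
usage[:total_tokens] == usage[:input_tokens] + usage[:output_tokens]
# => true
```

On a real `OpenAI::Models::Batch`, these values would come from `batch.usage.input_tokens` and friends; guard for `nil` on batches created before the cutoff date.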
51 changes: 47 additions & 4 deletions lib/openai/models/responses/response_custom_tool_call_output.rb
@@ -11,10 +11,11 @@ class ResponseCustomToolCallOutput < OpenAI::Internal::Type::BaseModel
required :call_id, String

# @!attribute output
# The output from the custom tool call generated by your code.
# The output from the custom tool call generated by your code. Can be a string or
# a list of output content.
#
# @return [String]
required :output, String
# @return [String, Array<OpenAI::Models::Responses::ResponseInputText, OpenAI::Models::Responses::ResponseInputImage, OpenAI::Models::Responses::ResponseInputFile>]
required :output, union: -> { OpenAI::Responses::ResponseCustomToolCallOutput::Output }

# @!attribute type
# The type of the custom tool call output. Always `custom_tool_call_output`.
@@ -36,11 +37,53 @@ class ResponseCustomToolCallOutput < OpenAI::Internal::Type::BaseModel
#
# @param call_id [String] The call ID, used to map this custom tool call output to a custom tool call.
#
# @param output [String] The output from the custom tool call generated by your code.
# @param output [String, Array<OpenAI::Models::Responses::ResponseInputText, OpenAI::Models::Responses::ResponseInputImage, OpenAI::Models::Responses::ResponseInputFile>] The output from the custom tool call generated by your code.
#
# @param id [String] The unique ID of the custom tool call output in the OpenAI platform.
#
# @param type [Symbol, :custom_tool_call_output] The type of the custom tool call output. Always `custom_tool_call_output`.

# The output from the custom tool call generated by your code. Can be a string or
# a list of output content.
#
# @see OpenAI::Models::Responses::ResponseCustomToolCallOutput#output
module Output
extend OpenAI::Internal::Type::Union

# A string of the output of the custom tool call.
variant String

# Text, image, or file output of the custom tool call.
variant -> { OpenAI::Models::Responses::ResponseCustomToolCallOutput::Output::OutputContentListArray }

# A text input to the model.
module OutputContentList
extend OpenAI::Internal::Type::Union

discriminator :type

# A text input to the model.
variant :input_text, -> { OpenAI::Responses::ResponseInputText }

# An image input to the model. Learn about [image inputs](https://platform.openai.com/docs/guides/vision).
variant :input_image, -> { OpenAI::Responses::ResponseInputImage }

# A file input to the model.
variant :input_file, -> { OpenAI::Responses::ResponseInputFile }

# @!method self.variants
# @return [Array(OpenAI::Models::Responses::ResponseInputText, OpenAI::Models::Responses::ResponseInputImage, OpenAI::Models::Responses::ResponseInputFile)]
end

# @!method self.variants
# @return [Array(String, Array<OpenAI::Models::Responses::ResponseInputText, OpenAI::Models::Responses::ResponseInputImage, OpenAI::Models::Responses::ResponseInputFile>)]

# @type [OpenAI::Internal::Type::Converter]
OutputContentListArray =
OpenAI::Internal::Type::ArrayOf[union: -> {
OpenAI::Responses::ResponseCustomToolCallOutput::Output::OutputContentList
}]
end
end
end
end
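With the union in place, `output` can be supplied either way. The hashes below sketch the two accepted shapes; field names follow the `ResponseInputText`/`ResponseInputImage`/`ResponseInputFile` models, and the values are made up:

```ruby
# Form 1: a plain string, unchanged from before this release.
string_output = "grep found 3 matches"

# Form 2: a list of content parts, discriminated by `type` — the same
# discriminator the OutputContentList union switches on.
list_output = [
  { type: :input_text, text: "Here is the requested screenshot:" },
  { type: :input_image, detail: :auto, image_url: "https://example.com/shot.png" },
  { type: :input_file, file_id: "file-abc123" }
]

types = list_output.map { |part| part[:type] }
# => [:input_text, :input_image, :input_file]
```

This mirrors how the SDK's `discriminator :type` dispatch picks a variant per element, while a bare `String` short-circuits to the first union branch.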