# Some parameter documentation has been truncated, see
# {OpenAI::Responses::Response} for more details.
#
@@ -242,18 +249,20 @@ class Response < OpenAI::Internal::Type::BaseModel
#
# @param model [String, Symbol, OpenAI::ChatModel, OpenAI::ResponsesModel::ResponsesOnlyModel] Model ID used to generate the response, like `gpt-4o` or `o3`. OpenAI
#
- # @param output [Array<OpenAI::Responses::ResponseOutputMessage, OpenAI::Responses::ResponseFileSearchToolCall, OpenAI::Responses::ResponseFunctionToolCall, OpenAI::Responses::ResponseFunctionWebSearch, OpenAI::Responses::ResponseComputerToolCall, OpenAI::Responses::ResponseReasoningItem>] An array of content items generated by the model.
+ # @param output [Array<OpenAI::Responses::ResponseOutputMessage, OpenAI::Responses::ResponseFileSearchToolCall, OpenAI::Responses::ResponseFunctionToolCall, OpenAI::Responses::ResponseFunctionWebSearch, OpenAI::Responses::ResponseComputerToolCall, OpenAI::Responses::ResponseReasoningItem, OpenAI::Responses::ResponseOutputItem::ImageGenerationCall, OpenAI::Responses::ResponseCodeInterpreterToolCall, OpenAI::Responses::ResponseOutputItem::LocalShellCall, OpenAI::Responses::ResponseOutputItem::McpCall, OpenAI::Responses::ResponseOutputItem::McpListTools, OpenAI::Responses::ResponseOutputItem::McpApprovalRequest>] An array of content items generated by the model.
#
# @param parallel_tool_calls [Boolean] Whether to allow the model to run tool calls in parallel.
#
# @param temperature [Float, nil] What sampling temperature to use, between 0 and 2. Higher values like 0.8 will m
#
# @param tool_choice [Symbol, OpenAI::Responses::ToolChoiceOptions, OpenAI::Responses::ToolChoiceTypes, OpenAI::Responses::ToolChoiceFunction] How the model should select which tool (or tools) to use when generating
#
- # @param tools [Array<OpenAI::Responses::FileSearchTool, OpenAI::Responses::FunctionTool, OpenAI::Responses::ComputerTool, OpenAI::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
+ # @param tools [Array<OpenAI::Responses::FunctionTool, OpenAI::Responses::FileSearchTool, OpenAI::Responses::ComputerTool, OpenAI::Responses::Tool::Mcp, OpenAI::Responses::Tool::CodeInterpreter, OpenAI::Responses::Tool::ImageGeneration, OpenAI::Responses::Tool::LocalShell, OpenAI::Responses::WebSearchTool>] An array of tools the model may call while generating a response. You
#
# @param top_p [Float, nil] An alternative to sampling with temperature, called nucleus sampling,
#
+ # @param background [Boolean, nil] Whether to run the model response in the background.
+ #
# @param max_output_tokens [Integer, nil] An upper bound for the number of tokens that can be generated for a response, in
#
# @param previous_response_id [String, nil] The unique ID of the previous response to the model. Use this to
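For context, the newly documented `background` flag and the expanded `tools` union surface as ordinary keyword arguments when creating a response. The following is a minimal sketch, assuming the Ruby client's `responses.create` accepts the same keywords documented on {OpenAI::Responses::Response} and coerces plain hashes into the typed tool classes; the model name, input text, and container setting are illustrative only, not taken from this diff.

require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Start a background response that is allowed to call the newly
# documented code interpreter tool while it runs.
response = client.responses.create(
  model: "gpt-4o",                       # any model from the documented union
  input: "Plot a histogram of 1, 2, 2, 3, 5, 8.",
  background: true,                      # new Boolean/nil parameter
  tools: [
    {
      type: "code_interpreter",          # one of the added tool variants
      container: {type: "auto"}          # illustrative container setting
    }
  ]
)

# `output` may now contain the added item types (image generation calls,
# code interpreter calls, local shell calls, MCP calls, and so on) in
# addition to the original message and tool-call variants.
response.output.each { |item| puts item.class }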