```diff
       # Some parameter documentations has been truncated, see
       # {OpenAI::Models::Audio::TranscriptionCreateParams} for more details.
       #
       # @param file [Pathname, StringIO, IO, OpenAI::FilePart] The audio file object (not file name) to transcribe, in one of these formats: fl
       #
       # @param model [String, Symbol, OpenAI::AudioModel] ID of the model to use. The options are `gpt-4o-transcribe`, `gpt-4o-mini-transc
       #
+      # @param chunking_strategy [Symbol, :auto, OpenAI::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig, nil] Controls how the audio is cut into chunks. When set to `"auto"`, the server firs
+      #
       # @param include [Array<Symbol, OpenAI::Audio::TranscriptionInclude>] Additional information to include in the transcription response.
       #
       # @param language [String] The language of the input audio. Supplying the input language in [ISO-639-1](htt
```
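The new `chunking_strategy` parameter accepts either the symbol `:auto` or a VAD configuration object. A minimal sketch of the two shapes as plain Ruby hashes follows; no API call is made, the field names come from the doc comments in this diff, and the concrete values are purely illustrative:

```ruby
# 1. Let the server pick chunk boundaries automatically.
auto_params = {
  model: "gpt-4o-transcribe",
  chunking_strategy: :auto
}

# 2. Tune server-side VAD chunking manually via a VadConfig-shaped hash.
#    The numeric values below are illustrative, not documented defaults.
vad_params = {
  model: "gpt-4o-transcribe",
  chunking_strategy: {
    type: :server_vad,
    prefix_padding_ms: 300,    # audio kept before detected speech
    silence_duration_ms: 500,  # silence length that ends a chunk
    threshold: 0.5             # VAD sensitivity, 0.0 to 1.0
  }
}
```

Leaving `chunking_strategy` unset keeps the old behavior: the audio is transcribed as a single block.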
```diff
@@ -124,6 +137,86 @@ module Model
       end
     end

+      # Controls how the audio is cut into chunks. When set to `"auto"`, the server
+      # first normalizes loudness and then uses voice activity detection (VAD) to choose
+      # boundaries. `server_vad` object can be provided to tweak VAD detection
+      # parameters manually. If unset, the audio is transcribed as a single block.
+      module ChunkingStrategy
+        extend OpenAI::Internal::Type::Union
+
+        # Automatically set chunking parameters based on the audio. Must be set to `"auto"`.
⋮
+        # Some parameter documentations has been truncated, see
+        # {OpenAI::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig} for more
+        # details.
+        #
+        # @param type [Symbol, OpenAI::Audio::TranscriptionCreateParams::ChunkingStrategy::VadConfig::Type] Must be set to `server_vad` to enable manual chunking using server side VAD.
+        #
+        # @param prefix_padding_ms [Integer] Amount of audio to include before the VAD detected speech (in
+        #
+        # @param silence_duration_ms [Integer] Duration of silence to detect speech stop (in milliseconds).
+        #
+        # @param threshold [Float] Sensitivity threshold (0.0 to 1.0) for voice activity detection. A
+
+        # Must be set to `server_vad` to enable manual chunking using server side VAD.
⋮
```
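The generated `ChunkingStrategy` module extends `OpenAI::Internal::Type::Union`, i.e. it models a value that is either the literal symbol `:auto` or a `VadConfig`. A hypothetical sketch of that union shape, assuming nothing about the real `OpenAI::Internal::Type::Union` API (which is richer and differs from this):

```ruby
# Illustrative stand-in for the generated union: classifies a chunking
# strategy value into one of its two variants, mirroring the doc comments
# above. Not the SDK's actual implementation.
module ChunkingStrategy
  def self.variant_for(value)
    case value
    when :auto then :auto         # server picks chunk boundaries
    when Hash  then :vad_config   # manual server-side VAD settings
    else raise ArgumentError, "unsupported chunking strategy: #{value.inspect}"
    end
  end
end

ChunkingStrategy.variant_for(:auto)               # => :auto
ChunkingStrategy.variant_for(type: :server_vad)   # => :vad_config
```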
```diff
@@ -196,7 +194,7 @@ class ThreadCreateAndRunParams < OpenAI::Internal::Type::BaseModel
       #
       # @param top_p [Float, nil] An alternative to sampling with temperature, called nucleus sampling, where the
       #
-      # @param truncation_strategy [OpenAI::Beta::ThreadCreateAndRunParams::TruncationStrategy, nil] Controls for how a thread will be truncated prior to the run. Use this to contro
+      # @param truncation_strategy [OpenAI::Beta::TruncationObject, nil] Controls for how a thread will be truncated prior to the run. Use this to contro
```
```diff
-        # The number of most recent messages from the thread when constructing the context
-        # for the run.
-        #
-        # @return [Integer, nil]
-        optional :last_messages, Integer, nil?: true
-
-        # @!method initialize(type:, last_messages: nil)
-        # Some parameter documentations has been truncated, see
-        # {OpenAI::Beta::ThreadCreateAndRunParams::TruncationStrategy} for more details.
-        #
-        # Controls for how a thread will be truncated prior to the run. Use this to
-        # control the intial context window of the run.
-        #
-        # @param type [Symbol, OpenAI::Beta::ThreadCreateAndRunParams::TruncationStrategy::Type] The truncation strategy to use for the thread. The default is `auto`. If set to
-        #
-        # @param last_messages [Integer, nil] The number of most recent messages from the thread when constructing the context
-
-        # The truncation strategy to use for the thread. The default is `auto`. If set to
-        # `last_messages`, the thread will be truncated to the n most recent messages in
-        # the thread. When set to `auto`, messages in the middle of the thread will be
-        # dropped to fit the context length of the model, `max_prompt_tokens`.
```
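The removed doc comments describe the two truncation modes that now live on the shared `OpenAI::Beta::TruncationObject`: `auto` drops messages from the middle of the thread to fit the model's context window, while `last_messages` keeps only the n most recent messages. A rough client-side sketch of the `last_messages` behavior, using a hypothetical helper rather than any SDK API:

```ruby
# Illustrative only: shows the message-selection semantics the doc
# comments describe. With :auto the server decides, so nothing happens
# client-side; with :last_messages only the n newest messages survive.
def truncate_thread(messages, type:, last_messages: nil)
  case type
  when :auto
    messages                      # server-side truncation; no-op here
  when :last_messages
    messages.last(last_messages)  # keep the n most recent messages
  else
    raise ArgumentError, "unknown truncation type: #{type}"
  end
end

msgs = %w[m1 m2 m3 m4 m5]
truncate_thread(msgs, type: :last_messages, last_messages: 2)  # => ["m4", "m5"]
```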
```diff
@@ -270,7 +270,7 @@ class Run < OpenAI::Internal::Type::BaseModel
       #
       # @param tools [Array<OpenAI::Beta::CodeInterpreterTool, OpenAI::Beta::FileSearchTool, OpenAI::Beta::FunctionTool>] The list of tools that the [assistant](https://platform.openai.com/docs/api-refe
       #
-      # @param truncation_strategy [OpenAI::Beta::Threads::Run::TruncationStrategy, nil] Controls for how a thread will be truncated prior to the run. Use this to contro
+      # @param truncation_strategy [OpenAI::Beta::TruncationObject, nil] Controls for how a thread will be truncated prior to the run. Use this to contro
       #
       # @param usage [OpenAI::Beta::Threads::Run::Usage, nil] Usage statistics related to the run. This value will be `null` if the run is not
       #
```
```diff
@@ -392,52 +392,6 @@ class SubmitToolOutputs < OpenAI::Internal::Type::BaseModel
⋮
-        # The number of most recent messages from the thread when constructing the context
-        # for the run.
-        #
-        # @return [Integer, nil]
-        optional :last_messages, Integer, nil?: true
-
-        # @!method initialize(type:, last_messages: nil)
-        # Some parameter documentations has been truncated, see
-        # {OpenAI::Beta::Threads::Run::TruncationStrategy} for more details.
-        #
-        # Controls for how a thread will be truncated prior to the run. Use this to
-        # control the intial context window of the run.
-        #
-        # @param type [Symbol, OpenAI::Beta::Threads::Run::TruncationStrategy::Type] The truncation strategy to use for the thread. The default is `auto`. If set to
-        #
-        # @param last_messages [Integer, nil] The number of most recent messages from the thread when constructing the context
-
-        # The truncation strategy to use for the thread. The default is `auto`. If set to
-        # `last_messages`, the thread will be truncated to the n most recent messages in
-        # the thread. When set to `auto`, messages in the middle of the thread will be
-        # dropped to fit the context length of the model, `max_prompt_tokens`.
```