 /// Create an assistant with a model and instructions.
 /// </summary>
+/// <param name="description">
+/// The description of the assistant. The maximum length is 512 characters.
+/// </param>
+/// <param name="instructions">
+/// The system instructions that the assistant uses. The maximum length is 256,000 characters.
+/// </param>
+/// <param name="metadata">
+/// Set of 16 key-value pairs that can be attached to an object. This can be<br/>
+/// useful for storing additional information about the object in a structured<br/>
+/// format, and querying for objects via API or the dashboard.<br/>
+/// Keys are strings with a maximum length of 64 characters. Values are strings<br/>
+/// with a maximum length of 512 characters.
+/// </param>
 /// <param name="model">
 /// ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models) for descriptions of them.<br/>
 /// Example: gpt-4o
 /// </param>
 /// <param name="name">
 /// The name of the assistant. The maximum length is 256 characters.
 /// </param>
-/// <param name="description">
-/// The description of the assistant. The maximum length is 512 characters.
-/// </param>
-/// <param name="instructions">
-/// The system instructions that the assistant uses. The maximum length is 256,000 characters.
@@ ... @@
-/// A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
-/// </param>
-/// <param name="toolResources">
-/// A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
-/// </param>
-/// <param name="metadata">
-/// Set of 16 key-value pairs that can be attached to an object. This can be<br/>
-/// useful for storing additional information about the object in a structured<br/>
-/// format, and querying for objects via API or the dashboard. <br/>
-/// Keys are strings with a maximum length of 64 characters. Values are strings<br/>
-/// with a maximum length of 512 characters.
+/// <param name="responseFormat">
+/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models#gpt-4o), [GPT-4 Turbo](/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
+/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
+/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
+/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
 /// </param>
 /// <param name="temperature">
 /// What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.<br/>
 /// Default Value: 1<br/>
 /// Example: 1
 /// </param>
+/// <param name="toolResources">
+/// A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
+/// </param>
+/// <param name="tools">
+/// A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
+/// </param>
 /// <param name="topP">
 /// An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br/>
 /// We generally recommend altering this or temperature but not both.<br/>
 /// Default Value: 1<br/>
 /// Example: 1
 /// </param>
-/// <param name="responseFormat">
-/// Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models#gpt-4o), [GPT-4 Turbo](/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.<br/>
-/// Setting to `{ "type": "json_schema", "json_schema": {...} }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](/docs/guides/structured-outputs).<br/>
-/// Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.<br/>
-/// **Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
-/// </param>
 /// <param name="cancellationToken">The token to cancel the operation with</param>
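The doc comments above quote several hard limits from the underlying REST API (512-character description, 256,000-character instructions, at most 16 metadata pairs with 64/512-character keys and values, at most 128 tools). As a rough illustration of those constraints, here is a minimal, language-neutral Python sketch that pre-validates a create-assistant payload before sending it. The snake_case field names follow the REST API rather than the C# parameter names, and the `validate_assistant_payload` helper is purely illustrative, not part of any SDK:

```python
# Limits as stated in the doc comments above (illustrative sketch, not the SDK).
LIMITS = {"description": 512, "instructions": 256_000, "name": 256}

def validate_assistant_payload(payload: dict) -> list[str]:
    """Return a list of constraint violations for a create-assistant payload."""
    errors = []
    # String-length limits on description, instructions, and name.
    for field, max_len in LIMITS.items():
        value = payload.get(field)
        if value is not None and len(value) > max_len:
            errors.append(f"{field} exceeds {max_len} characters")
    # Metadata: at most 16 pairs, keys <= 64 chars, values <= 512 chars.
    metadata = payload.get("metadata") or {}
    if len(metadata) > 16:
        errors.append("metadata has more than 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            errors.append(f"metadata key '{key}' exceeds 64 characters")
        if len(str(value)) > 512:
            errors.append("a metadata value exceeds 512 characters")
    # At most 128 tools per assistant.
    if len(payload.get("tools") or []) > 128:
        errors.append("more than 128 tools")
    return errors

payload = {
    "model": "gpt-4o",
    "name": "Math Tutor",
    "instructions": "You are a helpful math tutor. Always reply in JSON.",
    "metadata": {"team": "education"},
    # JSON mode, as described for the responseFormat parameter; note the
    # instructions above also tell the model to produce JSON, as required.
    "response_format": {"type": "json_object"},
}
print(validate_assistant_payload(payload))  # []
```

A payload that passes this check can still be rejected server-side; the sketch only mirrors the limits the doc comments spell out.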