specification/base/typespec/common/models.tsp (+14: 14 additions, 0 deletions)
@@ -249,6 +249,9 @@ model ModelResponsePropertiesForRequest {
   @minValue(0)
   @maxValue(2)
   temperature?: float32 | null = 1;
+
+  /** An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. */
+  top_logprobs?: int32 | null;
 
   @doc("""
     An alternative to sampling with temperature, called nucleus sampling,
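The new `top_logprobs` request field maps directly onto a client-side parameter. A minimal usage sketch, assuming the official `openai` Python SDK and an illustrative model name (neither is part of this diff):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",     # illustrative model name, not specified by this diff
    input="Say hello.",
    top_logprobs=5,     # an integer between 0 and 20, per the doc comment above
)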
@@ -265,6 +268,10 @@ model ModelResponsePropertiesForRequest {
   /** A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices#end-user-ids). */
   user?: string;
 
+  /** A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies.
+      The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices#safety-identifiers). */
+  safety_identifier?: string;
+
   service_tier?: ServiceTier;
 }
 model ModelResponsePropertiesForResponse {
@@ -278,6 +285,9 @@ model ModelResponsePropertiesForResponse {
   @maxValue(2)
   temperature: float32 | null;
 
+  /** An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. */
+  top_logprobs?: int32 | null;
+
   @doc("""
     An alternative to sampling with temperature, called nucleus sampling,
     where the model considers the results of the tokens with top_p probability
@@ -293,6 +303,10 @@ model ModelResponsePropertiesForResponse {
   /** A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices#end-user-ids). */
   user: string | null;
 
+  /** A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies.
+      The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices#safety-identifiers). */
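The `safety_identifier` doc comment recommends hashing the username or email address before sending it. A minimal sketch of that pattern, again assuming the `openai` Python SDK (the helper function and model name are illustrative):

import hashlib

from openai import OpenAI

client = OpenAI()

def hashed_identifier(email: str) -> str:
    # Hash the address so no identifying information is sent to the API.
    return hashlib.sha256(email.encode("utf-8")).hexdigest()

response = client.responses.create(
    model="gpt-4o",  # illustrative model name
    input="Hello!",
    safety_identifier=hashed_identifier("user@example.com"),
)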
specification/base/typespec/responses/models.tsp (+25: 25 additions, 0 deletions)
@@ -87,6 +87,25 @@ model CreateResponse {
    * for more information.
    */
   stream?: boolean | null = false;
+
+  /** The conversation that this response belongs to.
+   * Items from this conversation are prepended to input_items for this response request.
+   * Input items and output items from this response are automatically added to this conversation after this response completes. */
+  conversation?: ConversationParam | null;
+}
+
+/** The conversation that this response belongs to. Items from this conversation are prepended to `input_items` for this response request.
+    Input items and output items from this response are automatically added to this conversation after this response completes. */
+union ConversationParam {
+  string,
+  `ConversationParam-2`,
+}
+
+/** The conversation that this response belongs to. */
+@summary("Conversation object")
+model `ConversationParam-2` {
+  /** The unique ID of the conversation. */
+  id: string;
 }
 
 model Response {
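`ConversationParam` is a union of a bare conversation ID string and an object carrying the ID, so a request can pass either form. A sketch under the assumption that the `openai` Python SDK exposes the field directly (the conversation ID and model name are illustrative):

from openai import OpenAI

client = OpenAI()

# String arm of the ConversationParam union: pass the conversation ID directly.
response = client.responses.create(
    model="gpt-4o",           # illustrative model name
    input="Continue our discussion.",
    conversation="conv_123",  # illustrative conversation ID
)

# Object arm of the union (`ConversationParam-2` in the spec): just the `id` field.
response = client.responses.create(
    model="gpt-4o",
    input="Continue our discussion.",
    conversation={"id": "conv_123"},
)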
@@ -149,6 +168,9 @@ model Response {
 
   /** Whether to allow the model to run tool calls in parallel. */
   parallel_tool_calls: boolean = true;
+
+  /** The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation. */
+  conversation?: `ConversationParam-2` | null;
 }
 
 model ResponseProperties {
153
175
154
176
modelResponseProperties {
@@ -178,6 +200,9 @@ model ResponseProperties {
178
200
/** An upper bound for the number of tokens that can be generated for a response, including visible output tokens and [reasoning tokens](/docs/guides/reasoning). */
179
201
max_output_tokens?:int32 | null;
180
202
203
+
/** The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored. */
204
+
max_tool_calls?:int32 | null;
205
+
181
206
@doc("""
182
207
Inserts a system (or developer) message as the first item in the model's context.
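Per its doc comment, `max_tool_calls` caps the total number of built-in tool calls across the whole response, not per individual tool. A sketch, assuming the `openai` Python SDK (the tool type and model name are illustrative):

from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",                          # illustrative model name
    input="Find three sources and summarize them.",
    tools=[{"type": "web_search_preview"}],  # illustrative built-in tool
    max_tool_calls=3,  # cap shared across all built-in tool calls combined
)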