articles/ai-studio/reference/reference-model-inference-chat-completions.md (4 additions, 3 deletions)
@@ -43,7 +43,7 @@ POST /chat/completions?api-version=2024-04-01-preview
  | stop ||| Sequences where the API will stop generating further tokens. |
  | stream || boolean | If set, partial message deltas will be sent. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. |
  | temperature || number | Non-negative number. Return 422 if value is unsupported by model. |
- | tool\_choice || ChatCompletionToolChoiceOption(#chatcompletiontoolchoiceoption) | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
+ | tool\_choice ||[ChatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption)| Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
  | tools ||[ChatCompletionTool](#chatcompletiontool)\[\]| A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. Returns a 422 error if the tool is not supported by the model. |
  | top\_p || number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both. |
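As a hedged illustration of how the parameters in the table above combine in a request body (the payload keys follow the table; `my_function` and the weather tool are hypothetical names, and the exact wire format should be checked against the full reference):

```python
import json

# Sketch of a /chat/completions request body using the documented parameters.
# `my_function` is a hypothetical tool name, used only for illustration.
payload = {
    "messages": [{"role": "user", "content": "What is the weather in Seattle?"}],
    "stream": False,      # set True to receive data-only server-sent events
    "temperature": 0.7,   # non-negative; unsupported values return 422
    "stop": ["<|endoftext|>"],
    "tools": [{
        "type": "function",
        "function": {
            "name": "my_function",
            "description": "Hypothetical weather lookup.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
    # Forces the model to call my_function rather than letting it choose:
    "tool_choice": {"type": "function", "function": {"name": "my_function"}},
}
body = json.dumps(payload)
```

Note that only `temperature` is set here, since the table recommends altering either it or `top_p`, not both.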
  |[ChatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption)| Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
  |[ChatCompletionFinishReason](#chatcompletionfinishreason)| The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool. |
  |[ChatCompletionResponseMessage](#chatcompletionresponsemessage)| A chat completion message generated by the model. |
  |[ChatCompletionTool](#chatcompletiontool)||
  |[ChatMessageRole](#chatmessagerole)| The role of the author of this message. |
- |[Choices](#choices)| A list of chat completion choices. Can be more than one if `n` is greater than 1. |
+ |[Choices](#choices)| A list of chat completion choices. |
  |[CompletionUsage](#completionusage)| Usage statistics for the completion request. |
  |[ContentFilterError](#contentfiltererror)| The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again. |
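The `ChatCompletionFinishReason` values above lend themselves to a simple dispatch when handling a response. A minimal sketch, where the `response` shape is assumed from the tables on this page rather than guaranteed:

```python
# Sketch: act on ChatCompletionFinishReason values from a parsed response.
# The field names below mimic the documented shape; treat them as assumptions.
response = {
    "choices": [{
        "finish_reason": "tool_calls",
        "message": {"role": "assistant", "tool_calls": [{"type": "function"}]},
    }],
    "usage": {"prompt_tokens": 12, "completion_tokens": 5, "total_tokens": 17},
}

for choice in response["choices"]:
    reason = choice["finish_reason"]
    if reason == "stop":
        print("natural stop point or stop sequence hit")
    elif reason == "length":
        print("token limit reached; output may be truncated")
    elif reason == "content_filter":
        print("content omitted by the content filter")
    elif reason == "tool_calls":
        print("model requested a tool call")

# CompletionUsage gives token accounting for the request:
print(response["usage"]["total_tokens"])
```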
@@ -293,7 +294,7 @@ The API call fails when the prompt triggers a content filter as configured. Modi
  | stop ||| Sequences where the API will stop generating further tokens. |
  | stream | boolean | False | If set, partial message deltas will be sent. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. |
  | temperature | number | 1 | Non-negative number. Return 422 if value is unsupported by model. |
- | tool\_choice | ChatCompletionToolChoiceOption(#chatcompletiontoolchoiceoption) || Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
+ | tool\_choice |[ChatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption)|| Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
  | tools |[ChatCompletionTool](#chatcompletiontool)\[\]|| A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. Returns a 422 error if the tool is not supported by the model. |
  | top\_p | number | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both. |
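When `stream` is set, the table notes that tokens arrive as data-only server-sent events terminated by a `data: [DONE]` message. A minimal parser sketch; the `delta`/`content` field names inside each chunk are assumed from common chat-completions conventions, not taken from this page:

```python
import json

def collect_stream(lines):
    """Accumulate content deltas from data-only SSE lines until [DONE]."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and SSE comment lines
        data = line[len("data: "):]
        if data == "[DONE]":
            break  # documented stream terminator
        chunk = json.loads(data)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})  # assumed chunk shape
            if "content" in delta:
                text.append(delta["content"])
    return "".join(text)

# Example with mocked event lines:
events = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print(collect_stream(events))  # -> Hello
```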