Commit f5826c7

committed
fix
1 parent 7d12da2 commit f5826c7

File tree

1 file changed (4 additions, 3 deletions)


articles/ai-studio/reference/reference-model-inference-chat-completions.md

Lines changed: 4 additions & 3 deletions
@@ -43,7 +43,7 @@ POST /chat/completions?api-version=2024-04-01-preview
 | stop | | | Sequences where the API will stop generating further tokens. |
 | stream | | boolean | If set, partial message deltas will be sent. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. |
 | temperature | | number | Non-negative number. Return 422 if value is unsupported by model. |
-| tool\_choice | | ChatCompletionToolChoiceOption(#chatcompletiontoolchoiceoption) | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
+| tool\_choice | | [ChatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
 | tools | | [ChatCompletionTool](#chatcompletiontool)\[\] | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. Returns a 422 error if the tool is not supported by the model. |
 | top\_p | | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both. |
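The `tool_choice` row this commit fixes documents three forms: `none`, `auto`, or a specific function object. A minimal sketch of a request body using the documented `{"type": "function", "function": {"name": "my_function"}}` form; the function name, its schema, and the message content are illustrative placeholders, not part of the commit:

```python
import json

# Illustrative body for POST /chat/completions?api-version=2024-04-01-preview.
# "my_function" and its parameter schema are placeholders for this sketch.
request_body = {
    "messages": [{"role": "user", "content": "What is the weather in Seattle?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "my_function",
                "description": "Placeholder function the model may call.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Force a call to my_function; per the table, "auto" would be the
    # default here because a tool is present.
    "tool_choice": {"type": "function", "function": {"name": "my_function"}},
}

print(json.dumps(request_body, indent=2))
```

Per the table, an unsupported tool on the target model would yield a 422 error rather than a completion.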

@@ -153,14 +153,15 @@ Status code: 200
 | [ChatCompletionRequestMessage](#chatcompletionrequestmessage) | |
 | [ChatCompletionMessageContentPart](#chatcompletionmessagecontentpart) | |
 | [ChatCompletionMessageContentPartType](#chatcompletionmessagecontentparttype) | |
+| [ChatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
 | [ChatCompletionFinishReason](#chatcompletionfinishreason) | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool. |
 | [ChatCompletionMessageToolCall](#chatcompletionmessagetoolcall) | |
 | [ChatCompletionObject](#chatcompletionobject) | The object type, which is always `chat.completion`. |
 | [ChatCompletionResponseFormat](#chatcompletionresponseformat) | |
 | [ChatCompletionResponseMessage](#chatcompletionresponsemessage) | A chat completion message generated by the model. |
 | [ChatCompletionTool](#chatcompletiontool) | |
 | [ChatMessageRole](#chatmessagerole) | The role of the author of this message. |
-| [Choices](#choices) | A list of chat completion choices. Can be more than one if `n` is greater than 1. |
+| [Choices](#choices) | A list of chat completion choices. |
 | [CompletionUsage](#completionusage) | Usage statistics for the completion request. |
 | [ContentFilterError](#contentfiltererror) | The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again. |
 | [CreateChatCompletionRequest](#createchatcompletionrequest) | |
@@ -293,7 +294,7 @@ The API call fails when the prompt triggers a content filter as configured. Modi
 | stop | | | Sequences where the API will stop generating further tokens. |
 | stream | boolean | False | If set, partial message deltas will be sent. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. |
 | temperature | number | 1 | Non-negative number. Return 422 if value is unsupported by model. |
-| tool\_choice | ChatCompletionToolChoiceOption(#chatcompletiontoolchoiceoption) | | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
+| tool\_choice | [ChatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
 | tools | [ChatCompletionTool](#chatcompletiontool)\[\] | | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. Returns a 422 error if the tool is not supported by the model. |
 | top\_p | number | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both. |
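The `stream` rows above describe data-only server-sent events terminated by a `data: [DONE]` message. A minimal sketch of consuming such a stream; the chunk payload shapes in the example are illustrative, only the `data:` framing and `[DONE]` terminator come from the table:

```python
import json


def parse_sse_stream(lines):
    """Yield the JSON payload of each data-only server-sent event,
    stopping at the documented `data: [DONE]` terminator."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # ignore blank separator lines between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)


# Synthetic stream; the delta shape below is a placeholder, not the spec.
chunks = list(parse_sse_stream([
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    "",
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "",
    "data: [DONE]",
]))
text = "".join(c["choices"][0]["delta"]["content"] for c in chunks)
```

Accumulating the deltas in order reconstructs the full message text once the terminator is seen.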
