
Commit 7d12da2 ("tooling")
1 parent 3cc9f0c

2 files changed: 18 additions, 7 deletions


articles/ai-studio/reference/reference-model-inference-chat-completions.md

Lines changed: 13 additions & 2 deletions
@@ -43,7 +43,7 @@ POST /chat/completions?api-version=2024-04-01-preview
 | stop | | | Sequences where the API will stop generating further tokens. |
 | stream | | boolean | If set, partial message deltas will be sent. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. |
 | temperature | | number | Non-negative number. Returns a 422 error if the value is unsupported by the model. |
-| tool\_choice | | ChatCompletionToolChoiceOption | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
+| tool\_choice | | [ChatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
 | tools | | [ChatCompletionTool](#chatcompletiontool)\[\] | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. Returns a 422 error if the tool is not supported by the model. |
 | top\_p | | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both. |
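To make the parameters above concrete, a request body that forces a specific tool call could be assembled as below. This is only a sketch: the `my_function` tool and its schema are hypothetical, and the body is built but not sent.

```python
import json

# Sketch of a /chat/completions request body; "my_function" and its
# parameter schema are hypothetical, not part of the reference above.
payload = {
    "messages": [
        {"role": "user", "content": "What is the weather in Seattle?"}
    ],
    "temperature": 0.7,
    "top_p": 1,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "my_function",
                "description": "Hypothetical weather lookup",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Force the model to call my_function rather than reply with text.
    "tool_choice": {"type": "function", "function": {"name": "my_function"}},
}

body = json.dumps(payload)
```

Omitting `tool_choice` here would default it to `auto`, since a tool is present.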

@@ -293,7 +293,7 @@ The API call fails when the prompt triggers a content filter as configured. Modi
 | stop | | | Sequences where the API will stop generating further tokens. |
 | stream | boolean | False | If set, partial message deltas will be sent. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. |
 | temperature | number | 1 | Non-negative number. Returns a 422 error if the value is unsupported by the model. |
-| tool\_choice | ChatCompletionToolChoiceOption | | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
+| tool\_choice | [ChatCompletionToolChoiceOption](#chatcompletiontoolchoiceoption) | | Controls which (if any) function is called by the model. `none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that function.<br><br>`none` is the default when no functions are present. `auto` is the default if functions are present. Returns a 422 error if the tool is not supported by the model. |
 | tools | [ChatCompletionTool](#chatcompletiontool)\[\] | | A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. Returns a 422 error if the tool is not supported by the model. |
 | top\_p | number | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top\_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.<br><br>We generally recommend altering this or `temperature` but not both. |
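The `stream` behavior described above (data-only server-sent events terminated by `data: [DONE]`) can be consumed client-side with a small parser. The helper below is an illustrative sketch of one way to do it, not part of the API itself.

```python
import json

def iter_stream_deltas(lines):
    """Yield parsed JSON payloads from data-only server-sent events,
    stopping at the terminating 'data: [DONE]' message."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and comment lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        yield json.loads(data)

# Hand-written sample events, mimicking the documented stream format:
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
text = "".join(d["choices"][0]["delta"]["content"]
               for d in iter_stream_deltas(sample))
```

In a real client the lines would come from the HTTP response body rather than a list.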

@@ -321,6 +321,17 @@ The API call fails when the prompt triggers a content filter as configured. Modi
 | image | string | |
 | image_url | string | |

+### ChatCompletionToolChoiceOption
+
+Controls which (if any) tool is called by the model.
+
+| Name | Type | Description |
+| --- | --- | --- |
+| none | string | The model will not call any tool and instead generates a message. |
+| auto | string | The model can pick between generating a message or calling one or more tools. |
+| required | string | The model must call one or more tools. |
+| | string | Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. |
+
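As a client-side illustration of the options in the new section above, a hypothetical validator might accept the three string values or an object naming a specific function:

```python
def is_valid_tool_choice(value):
    """Accept the documented string options ('none', 'auto', 'required')
    or an object forcing a specific function.  Hypothetical helper,
    not part of the API surface."""
    if value in ("none", "auto", "required"):
        return True
    return (
        isinstance(value, dict)
        and value.get("type") == "function"
        and isinstance(value.get("function", {}).get("name"), str)
    )
```

Validating locally like this can avoid a round trip that would end in a 422 response.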
 ### ImageDetail

 Specifies the detail level of the image.

articles/ai-studio/reference/reference-model-inference-completions.md

Lines changed: 5 additions & 5 deletions
@@ -50,11 +50,11 @@ POST /completions?api-version=2024-04-01-preview
 | Name | Type | Description |
 | --- | --- | --- |
 | 200 OK | [CreateCompletionResponse](#createcompletionresponse) | OK |
-| 401 Unauthorized | | Access token is missing or invalid |
-| 404 Not Found | | Modality not supported by the model. Check the documentation of the model to see which routes are available. |
-| 422 Unprocessable Entity | [UnprocessableContentError](#unprocessablecontenterror) | The request contains unprocessable content<br><br>Headers<br><br>x-ms-error-code: string |
-| 429 Too Many Requests | | You have hit your assigned rate limit and your request need to be paced. |
-| Other Status Codes | [ContentFilterError](#contentfiltererror) | Bad request<br><br>Headers<br><br>x-ms-error-code: string |
+| 401 Unauthorized | [UnauthorizedError](#unauthorizederror) | Access token is missing or invalid<br><br>Headers<br><br>x-ms-error-code: string |
+| 404 Not Found | [NotFoundError](#notfounderror) | Modality not supported by the model. Check the documentation of the model to see which routes are available.<br><br>Headers<br><br>x-ms-error-code: string |
+| 422 Unprocessable Entity | [UnprocessableContentError](#unprocessablecontenterror) | The request contains unprocessable content<br><br>Headers<br><br>x-ms-error-code: string |
+| 429 Too Many Requests | [TooManyRequestsError](#toomanyrequestserror) | You have hit your assigned rate limit and your request needs to be paced.<br><br>Headers<br><br>x-ms-error-code: string |
+| Other Status Codes | [ContentFilterError](#contentfiltererror) | Bad request<br><br>Headers<br><br>x-ms-error-code: string |
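A caller might branch on these documented status codes and surface the `x-ms-error-code` header. The helper below is a hypothetical sketch; it assumes the response headers are available as a plain mapping (as with `requests`-style clients).

```python
def describe_error(status_code, headers):
    """Map the documented error responses to an action hint,
    including the x-ms-error-code header when present.
    Hypothetical client-side helper, not part of the API."""
    code = headers.get("x-ms-error-code", "unknown")
    if status_code == 401:
        return f"unauthorized ({code}): check the access token"
    if status_code == 404:
        return f"not found ({code}): modality not supported by this model"
    if status_code == 422:
        return f"unprocessable content ({code}): fix the request body"
    if status_code == 429:
        return f"rate limited ({code}): pace requests and retry later"
    return f"error {status_code} ({code})"
```

Only 429 is worth retrying automatically; the other codes indicate a problem with the credentials, route, or request body.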

## Security
