
Commit 996b16f (1 parent: 5fe10be)

Update reference-model-inference-chat-completions.md

1 file changed: +10 −1 lines

articles/ai-studio/reference/reference-model-inference-chat-completions.md

Lines changed: 10 additions & 1 deletion
@@ -121,7 +121,7 @@ POST /chat/completions?api-version=2024-04-01-preview
     "stream": false,
     "temperature": 0,
     "top_p": 1,
-    "response_format": "text"
+    "response_format": { "type": "text" }
 }
 ```
@@ -166,6 +166,7 @@ Status code: 200
 | [ChatCompletionMessageToolCall](#chatcompletionmessagetoolcall) | |
 | [ChatCompletionObject](#chatcompletionobject) | The object type, which is always `chat.completion`. |
 | [ChatCompletionResponseFormat](#chatcompletionresponseformat) | The response format for the model response. Setting to `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON. When using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length. |
+| [ChatCompletionResponseFormatType](#chatcompletionresponseformatrype) | The response format type. |
 | [ChatCompletionResponseMessage](#chatcompletionresponsemessage) | A chat completion message generated by the model. |
 | [ChatCompletionTool](#chatcompletiontool) | |
 | [ChatMessageRole](#chatmessagerole) | The role of the author of this message. |
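The `ChatCompletionResponseFormat` row above notes that JSON mode requires two things together: setting the type to `json_object` and instructing the model to emit JSON via a system or user message. A minimal sketch of such a request body (the message text is illustrative, not taken from the reference):

```python
# Hedged sketch of a JSON-mode request body. Per the reference table, setting
# response_format.type to "json_object" is not enough on its own: a system or
# user message must also tell the model to produce JSON.
payload = {
    "messages": [
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "Give me a color and its hex code."},
    ],
    "response_format": {"type": "json_object"},  # enables JSON mode
}
```

Note that even in JSON mode, the output can be truncated mid-object when `finish_reason="length"`, so callers should still validate the returned JSON before parsing it.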
@@ -219,6 +220,14 @@ The object type, which is always `chat.completion`.
 
 The response format for the model response. Setting to `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON. When using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
 
+| Name | Type | Description |
+| --- | --- | --- |
+| type | [ChatCompletionResponseFormatType](#chatcompletionresponseformatrype) | The response format type. |
+
+### ChatCompletionResponseFormatType
+
+The response format type.
+
 | Name | Type | Description |
 | --- | --- | --- |
 | json\_object | string | |
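The substantive change in this commit is that `response_format` is an object with a `type` field rather than a bare string. A minimal sketch of the corrected request body, assuming the commit's example payload (the surrounding request plumbing, URL, and headers are not shown in the diff):

```python
import json

# Request body matching the corrected example in this commit:
# response_format is now an object, not the string "text".
payload = {
    "messages": [{"role": "user", "content": "Say hello."}],  # illustrative message
    "stream": False,
    "temperature": 0,
    "top_p": 1,
    "response_format": {"type": "text"},  # was: "response_format": "text"
}

# Serialize for the POST /chat/completions?api-version=2024-04-01-preview call.
body = json.dumps(payload)
```

Clients still sending the old string form would need to migrate to the object form shown above.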
