Commit 8013824

Merge pull request #390 from santiagxf/santiagxf-patch-2
Update reference-model-inference-chat-completions.md
2 parents: d598100 + af87cc1

1 file changed (+1, -0)

articles/ai-studio/reference/reference-model-inference-chat-completions.md

Lines changed: 1 addition & 0 deletions
@@ -42,6 +42,7 @@ POST /chat/completions?api-version=2024-04-01-preview
| Name | Required | Type | Description |
| --- | --- | --- | --- |
+ | model | | string | The model name. This parameter is ignored if the endpoint serves only one model. |
| messages | True | [ChatCompletionRequestMessage](#chatcompletionrequestmessage) | A list of messages comprising the conversation so far. Returns a 422 error if at least some of the messages can't be understood by the model. |
| frequency\_penalty | | number | Helps prevent word repetitions by reducing the chance of a word being selected if it has already been used. The higher the frequency penalty, the less likely the model is to repeat the same words in its output. Returns a 422 error if the value or parameter is not supported by the model. |
| max\_tokens | | integer | The maximum number of tokens that can be generated in the chat completion.<br><br>The total length of input tokens and generated tokens is limited by the model's context length. Passing null causes the model to use its max context length. |
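
For context, here is a minimal sketch of a request that exercises the newly added model parameter. The path and api-version come from the hunk header above, and the body fields come from the parameter table; the endpoint URL, the Authorization header scheme, and the placeholder values are assumptions for illustration, not part of the changed file.

```python
import requests

# Assumed values for illustration; substitute your own deployment's endpoint and key.
ENDPOINT = "https://<your-endpoint>.inference.ai.azure.com"
API_KEY = "<your-api-key>"

# Request body built from the parameters documented in the table above.
# `model` is ignored when the endpoint serves only one model.
body = {
    "model": "<model-name>",
    "messages": [
        {"role": "user", "content": "How many languages are in the world?"}
    ],
    "frequency_penalty": 0,
    "max_tokens": 256,
}

response = requests.post(
    f"{ENDPOINT}/chat/completions",
    params={"api-version": "2024-04-01-preview"},
    # The bearer-token header scheme is an assumption; check your endpoint's auth settings.
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=body,
)
response.raise_for_status()
print(response.json())
```

Per the table, a 422 response indicates a value or parameter the model can't accept.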
