Commit d08fd40

Merge pull request #278098 from MicrosoftDocs/main
6/13 11:00 AM IST Publish
2 parents bc31b1d + 6e0e420 commit d08fd40

94 files changed (+1190, −558 lines)


articles/ai-studio/reference/reference-model-inference-api.md

Lines changed: 46 additions & 3 deletions
@@ -32,7 +32,7 @@ While foundational models excel in specific domains, they lack a uniform set of
 > * Use smaller models that can run faster on specific tasks.
 > * Compose multiple models to develop intelligent experiences.
 
-Having a uniform way to consume foundational models allow developers to realize all those benefits without changing a single line of code on their applications.
+Having a uniform way to consume foundational models allows developers to realize all those benefits without sacrificing portability or changing the underlying code.
 
 ## Availability
 
@@ -43,8 +43,8 @@ Models deployed to [serverless API endpoints](../how-to/deploy-models-serverless
 > [!div class="checklist"]
 > * [Cohere Embed V3](../how-to/deploy-models-cohere-embed.md) family of models
 > * [Cohere Command R](../how-to/deploy-models-cohere-command.md) family of models
-> * [Meta Llama 2](../how-to/deploy-models-llama.md) family of models
-> * [Meta Llama 3](../how-to/deploy-models-llama.md) family of models
+> * [Meta Llama 2 chat](../how-to/deploy-models-llama.md) family of models
+> * [Meta Llama 3 instruct](../how-to/deploy-models-llama.md) family of models
 > * [Mistral-Small](../how-to/deploy-models-mistral.md)
 > * [Mistral-Large](../how-to/deploy-models-mistral.md)
 > * [Phi-3](../how-to/deploy-models-phi-3.md) family of models
@@ -154,6 +154,49 @@ __Response__
 > [!TIP]
 > You can inspect the property `details.loc` to understand the location of the offending parameter and `details.input` to see the value that was passed in the request.
 
+## Content safety
+
+The Azure AI model inference API supports [Azure AI Content Safety](../concepts/content-filtering.md). When you use deployments with Azure AI Content Safety turned on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+
+The following example shows the response for a chat completion request that has triggered content safety.
+
+__Request__
+
+```HTTP/1.1
+POST /chat/completions?api-version=2024-04-01-preview
+Authorization: Bearer <bearer-token>
+Content-Type: application/json
+```
+
+```JSON
+{
+    "messages": [
+        {
+            "role": "system",
+            "content": "You are a helpful assistant"
+        },
+        {
+            "role": "user",
+            "content": "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."
+        }
+    ],
+    "temperature": 0,
+    "top_p": 1
+}
+```
+
+__Response__
+
+```JSON
+{
+    "status": 400,
+    "code": "content_filter",
+    "message": "The response was filtered",
+    "param": "messages",
+    "type": null
+}
+```
+
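For illustration, client code might detect this condition as follows. This is a minimal sketch, assuming the `requests` package and hypothetical placeholder values for the endpoint URL, token, and payload:

```python
import requests

# Hypothetical placeholders -- substitute your serverless API endpoint and key.
url = "https://<endpoint>/chat/completions?api-version=2024-04-01-preview"
headers = {"Authorization": "Bearer <bearer-token>", "Content-Type": "application/json"}

payload = {
    "messages": [{"role": "user", "content": "How do I practice my knife skills?"}],
    "temperature": 0,
    "top_p": 1,
}

response = requests.post(url, headers=headers, json=payload)
if response.status_code == 400 and response.json().get("code") == "content_filter":
    # The prompt or the generated completion was flagged by the content filters.
    # Rephrase the offending message (see `param` in the response) and try again.
    print("Filtered:", response.json()["message"])
```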
 ## Getting started
 
 The Azure AI Model Inference API is currently supported for models deployed as [serverless API endpoints](../how-to/deploy-models-serverless.md). Deploy any of the [supported models](#availability) to a new [serverless API endpoint](../how-to/deploy-models-serverless.md) to get started. Then you can consume the API in the following ways:

articles/ai-studio/reference/reference-model-inference-chat-completions.md

Lines changed: 31 additions & 14 deletions
@@ -30,6 +30,14 @@ POST /chat/completions?api-version=2024-04-01-preview
 | --- | --- | --- | --- | --- |
 | api-version | query | True | string | The version of the API in the format "YYYY-MM-DD" or "YYYY-MM-DD-preview". |
 
+## Request Header
+
+| Name | Required | Type | Description |
+| --- | --- | --- | --- |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `allow` makes the API pass the parameter to the underlying model; use this value when you want to pass parameters that you know the underlying model can support. Using `drop` makes the API drop any unsupported parameter; use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if not supported. Using `error` makes the API reject any extra parameter in the payload; only parameters specified in this API can be indicated, or a 400 error is returned. |
+| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. |
+
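For illustration, a request that asks the API to drop unsupported extra parameters would carry the header as in this sketch (the request line mirrors the examples elsewhere on this page; the token is a placeholder):

```HTTP/1.1
POST /chat/completions?api-version=2024-04-01-preview
Authorization: Bearer <bearer-token>
Content-Type: application/json
extra-parameters: drop
```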
 ## Request Body
 
 | Name | Required | Type | Description |
@@ -113,7 +121,7 @@ POST /chat/completions?api-version=2024-04-01-preview
     "stream": false,
     "temperature": 0,
     "top_p": 1,
-    "response_format": "text"
+    "response_format": { "type": "text" }
 }
 ```
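As the `ChatCompletionResponseFormat` section below explains, setting the type to `json_object` instead enables JSON mode, in which case the model must also be instructed to produce JSON via a system or user message. A hedged sketch of such a request body (illustrative messages only):

```JSON
{
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant. Respond only with valid JSON."
        },
        {
            "role": "user",
            "content": "List three tomato varieties and a one-line description of each."
        }
    ],
    "response_format": { "type": "json_object" }
}
```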

@@ -157,7 +165,8 @@ Status code: 200
 | [ChatCompletionFinishReason](#chatcompletionfinishreason) | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool. |
 | [ChatCompletionMessageToolCall](#chatcompletionmessagetoolcall) | |
 | [ChatCompletionObject](#chatcompletionobject) | The object type, which is always `chat.completion`. |
-| [ChatCompletionResponseFormat](#chatcompletionresponseformat) | |
+| [ChatCompletionResponseFormat](#chatcompletionresponseformat) | The response format for the model response. Setting it to `json_object` enables JSON mode, which guarantees that the message the model generates is valid JSON. When using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length. |
+| [ChatCompletionResponseFormatType](#chatcompletionresponseformattype) | The response format type. |
 | [ChatCompletionResponseMessage](#chatcompletionresponsemessage) | A chat completion message generated by the model. |
 | [ChatCompletionTool](#chatcompletiontool) | |
 | [ChatMessageRole](#chatmessagerole) | The role of the author of this message. |
@@ -166,15 +175,15 @@ Status code: 200
 | [ContentFilterError](#contentfiltererror) | The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again. |
 | [CreateChatCompletionRequest](#createchatcompletionrequest) | |
 | [CreateChatCompletionResponse](#createchatcompletionresponse) | Represents a chat completion response returned by the model, based on the provided input. |
-| [Detail](#detail) | |
+| [Detail](#detail) | Details for the [UnprocessableContentError](#unprocessablecontenterror) error. |
 | [Function](#function) | The function that the model called. |
-| [FunctionObject](#functionobject) | |
+| [FunctionObject](#functionobject) | Definition of a function the model has access to. |
 | [ImageDetail](#imagedetail) | Specifies the detail level of the image. |
-| [NotFoundError](#notfounderror) | |
+| [NotFoundError](#notfounderror) | The route is not valid for the deployed model. |
 | [ToolType](#tooltype) | The type of the tool. Currently, only `function` is supported. |
-| [TooManyRequestsError](#toomanyrequestserror) | |
-| [UnauthorizedError](#unauthorizederror) | |
-| [UnprocessableContentError](#unprocessablecontenterror) | |
+| [TooManyRequestsError](#toomanyrequestserror) | You have hit your assigned rate limit and your requests need to be paced. |
+| [UnauthorizedError](#unauthorizederror) | Authentication is missing or invalid. |
+| [UnprocessableContentError](#unprocessablecontenterror) | The request contains unprocessable content. This error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter. |
 
 
 ### ChatCompletionFinishReason
@@ -209,6 +218,15 @@ The object type, which is always `chat.completion`.
 
 ### ChatCompletionResponseFormat
 
+The response format for the model response. Setting it to `json_object` enables JSON mode, which guarantees that the message the model generates is valid JSON. When using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
+
+| Name | Type | Description |
+| --- | --- | --- |
+| type | [ChatCompletionResponseFormatType](#chatcompletionresponseformattype) | The response format type. |
+
+### ChatCompletionResponseFormatType
+
+The response format type.
 
 | Name | Type | Description |
 | --- | --- | --- |
@@ -237,7 +255,6 @@ A chat completion message generated by the model.
 
 The role of the author of this message.
 
-
 | Name | Type | Description |
 | --- | --- | --- |
 | assistant | string | |
@@ -249,7 +266,6 @@ The role of the author of this message.
 
 A list of chat completion choices. Can be more than one if `n` is greater than 1.
 
-
 | Name | Type | Description |
 | --- | --- | --- |
 | finish\_reason | [ChatCompletionFinishReason](#chatcompletionfinishreason) | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool. |
@@ -282,7 +298,6 @@ The API call fails when the prompt triggers a content filter as configured. Modi
 
 ### CreateChatCompletionRequest
 
-
 | Name | Type | Default Value | Description |
 | --- | --- | --- | --- |
 | frequency\_penalty | number | 0 | Helps prevent word repetitions by reducing the chance of a word being selected if it has already been used. The higher the frequency penalty, the less likely the model is to repeat the same words in its output. Returns a 422 error if the value or parameter isn't supported by the model. |
@@ -348,7 +363,6 @@ Specifies the detail level of the image.
 
 Represents a chat completion response returned by the model, based on the provided input.
 
-
 | Name | Type | Description |
 | --- | --- | --- |
 | choices | [Choices](#choices)\[\] | A list of chat completion choices. Can be more than one if `n` is greater than 1. |
@@ -361,6 +375,7 @@ Represents a chat completion response returned by model, based on the provided i
 
 ### Detail
 
+Details for the [UnprocessableContentError](#unprocessablecontenterror) error.
 
 | Name | Type | Description |
 | --- | --- | --- |
@@ -371,14 +386,14 @@ Represents a chat completion response returned by model, based on the provided i
 
 The function that the model called.
 
-
 | Name | Type | Description |
 | --- | --- | --- |
 | arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may generate incorrect parameters not defined by your function schema. Validate the arguments in your code before calling your function. |
 | name | string | The name of the function to call. |
 
 ### FunctionObject
 
+Definition of a function the model has access to.
 
 | Name | Type | Description |
 | --- | --- | --- |
@@ -407,6 +422,7 @@ The type of the tool. Currently, only `function` is supported.
 ### TooManyRequestsError
 
+You have hit your assigned rate limit and your requests need to be paced.
 
 | Name | Type | Description |
 | --- | --- | --- |
 | error | string | The error description. |
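Since this error means requests need to be paced, a client might retry with exponential backoff. A minimal sketch, assuming the error surfaces as HTTP status 429 and that the `requests` package is available:

```python
import time
import requests

def post_with_backoff(url, headers, payload, max_retries=5):
    """Call the endpoint, pacing retries when the rate limit is hit.

    Assumes TooManyRequestsError surfaces as HTTP status 429.
    """
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
    return response
```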
@@ -424,11 +440,12 @@ The type of the tool. Currently, only `function` is supported.
 
 ### UnprocessableContentError
 
+The request contains unprocessable content. This error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter.
 
 | Name | Type | Description |
 | --- | --- | --- |
 | code | string | The error code. |
 | detail | [Detail](#detail) | |
 | error | string | The error description. |
 | message | string | The error message. |
-| status | integer | The HTTP status code. |
+| status | integer | The HTTP status code. |
