articles/ai-services/openai/api-version-deprecation.md (+9 −1)
@@ -24,7 +24,7 @@ This article is to help you understand the support lifecycle for the Azure OpenA
Azure OpenAI API version [2024-04-01-preview](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-04-01-preview/inference.json)
is currently the latest preview release.
-This version contains support for all the latest Azure OpenAI features including:
+This version contains support for the latest Azure OpenAI features including:
- [Embeddings `encoding_format` and `dimensions` parameters] [**Added in 2024-03-01-preview**] (see the sketch after this list)
- [Assistants API](./assistants-reference.md) [**Added in 2024-02-15-preview**]
@@ -35,6 +35,14 @@ This version contains support for all the latest Azure OpenAI features including
- [Function calling](./how-to/function-calling.md) [**Added in 2023-07-01-preview**]
- [Retrieval augmented generation with the on your data feature](./use-your-data-quickstart.md) [**Added in 2023-06-01-preview**]
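As a quick illustration of the embeddings parameters called out above, here's a minimal sketch of a request that sets `encoding_format` and `dimensions`. The resource name, deployment name, and input are placeholders, and `dimensions` assumes a deployed model that supports shortened embeddings, such as the text-embedding-3 family:

```bash
# Hypothetical embeddings call; {your-resource-name} and {embeddings-deployment} are placeholders.
curl "https://{your-resource-name}.openai.azure.com/openai/deployments/{embeddings-deployment}/embeddings?api-version=2024-04-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "input": "Sample text to embed",
    "encoding_format": "float",
    "dimensions": 256
  }'
```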
+## Changes between 2024-03-01-preview and 2024-04-01-preview API specification
+
+- **Breaking Change**: Enhancements parameters removed. This impacts the `gpt-4` **Version:** `vision-preview` model.
Azure OpenAI API version [2024-02-01](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2024-02-01/inference.json)
@@ -222,6 +224,7 @@ POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2023-12-01-preview` (retiring July 1, 2024) (This version or greater required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview)
@@ -237,7 +240,7 @@ The request body consists of a series of messages. The model will generate a res
|`content`| string or array | Yes | N/A | The content of the message. It must be a string, unless in a Vision-enabled scenario. If it's part of the `user` message, using the GPT-4 Turbo with Vision model, with the latest API version, then `content` must be an array of structures, where each item represents either text or an image: <ul><li> `text`: input text is represented as a structure with the following properties: </li> <ul> <li> `type` = "text" </li> <li> `text` = the input text </li> </ul> <li> `images`: an input image is represented as a structure with the following properties: </li><ul> <li> `type` = "image_url" </li> <li> `image_url` = a structure with the following properties: </li> <ul> <li> `url` = the image URL </li> <li>(optional) `detail` = `high`, `low`, or `auto` </li> </ul> </ul> </ul>|
|`contentPart`| object | No | N/A | Part of a user's multi-modal message. It can be either a text type or an image type. If text, it's a text string. If image, it's a `contentPartImage` object. |
|`contentPartImage`| object | No | N/A | Represents a user-uploaded image. It has a `url` property, which is either a URL of the image or the base64-encoded image data. It also has a `detail` property which can be `auto`, `low`, or `high`.|
-|`enhancements`| object | No | N/A | Represents the Vision enhancement features requested for the chat. It has `grounding` and `ocr` properties, each with a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. [This preview parameter is not available in the `2024-02-01` GA API.]|
+|`enhancements`| object | No | N/A | Represents the Vision enhancement features requested for the chat. It has `grounding` and `ocr` properties, each with a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service. [This preview parameter is not available in the `2024-02-01` GA API and is no longer available in preview APIs after `2024-03-01-preview`.]|
|`dataSources`| object | No | N/A | Represents additional resource data. Computer Vision resource data is needed for Vision enhancement. It has a `type` property, which should be `"AzureComputerVision"` and a `parameters` property, which has an `endpoint` and `key` property. These strings should be set to the endpoint URL and access key of your Computer Vision resource.|
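To make the `content` array shape concrete, here's a minimal sketch of a Vision-enabled chat completions request. The resource name, deployment, and image URL are placeholders; per the rows above, `enhancements` and `dataSources` would only apply on preview versions up through `2024-03-01-preview`, so this sketch sticks to the plain multi-modal message:

```bash
# Hypothetical Vision chat request; names and URLs are placeholders.
curl "https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions?api-version=2024-04-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image." },
          { "type": "image_url", "image_url": { "url": "https://example.com/image.png", "detail": "auto" } }
        ]
      }
    ],
    "max_tokens": 300
  }'
```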
@@ -625,6 +635,7 @@ POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|```prompt```| string | No | Null | An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.<br/><br/>For more information about prompts including example use cases, see the [OpenAI documentation](https://platform.openai.com/docs/guides/speech-to-text/supported-languages). |
|```response_format```| string | No | json | The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.<br/><br/>The default value is *json*. |
|```temperature```| number | No | 0 | The sampling temperature, between 0 and 1.<br/><br/>Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. If set to 0, the model uses [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.<br/><br/>The default value is *0*. |
+|```timestamp_granularities```| array | Optional | segment | The timestamp granularities to populate for this transcription. `response_format` must be set to `verbose_json` to use timestamp granularities. Either or both of these options are supported: `word` or `segment`. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency. [**Added in 2024-04-01-preview**]|
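For instance, a minimal sketch of a transcription request asking for word-level timestamps might look like the following; the deployment name and audio file are placeholders, and the multipart form-field syntax is assumed to mirror the OpenAI-style upload:

```bash
# Hypothetical transcription call; timestamp_granularities requires response_format=verbose_json.
curl "https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}/audio/transcriptions?api-version=2024-04-01-preview" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -F file="@speech.mp3" \
  -F response_format=verbose_json \
  -F "timestamp_granularities[]=word"
```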
#### Example request
@@ -751,6 +762,8 @@ POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen