✔️ See [**Install and run Document Intelligence containers**](?view=doc-intel-3.1.0&preserve-view=true) for supported container documentation.
Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your documents. The results are delivered as structured data that includes the relationships in the original file.

In this article, you learn how to download, install, and run Document Intelligence containers. Containers enable you to run the Document Intelligence service in your own environment, which makes them well suited for specific security and data governance requirements.
* **Read**, **Layout**, **ID Document**, **Receipt**, and **Invoice** models are supported by Document Intelligence v3.1 containers.
* **Read**, **Layout**, **General Document**, **Business Card**, and **Custom** models are supported by Document Intelligence v3.0 containers.
## Version support
Support for containers is currently available with Document Intelligence version `v3.0: 2022-08-31 (GA)` for all models and `v3.1: 2023-07-31 (GA)` for Read, Layout, ID Document, Receipt, and Invoice models:
* [REST API `v3.0: 2022-08-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)
* [REST API `v3.1: 2023-07-31 (GA)`](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.1%20(2023-07-31)&tabs=HTTP&preserve-view=true)
* [Client libraries targeting `REST API v3.0: 2022-08-31 (GA)`](../sdk-overview-v3-0.md)
* [Client libraries targeting `REST API v3.1: 2023-07-31 (GA)`](../sdk-overview-v3-1.md)
## Prerequisites
To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
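After the container is running in your environment, you call the same v3 REST routes against your own host instead of the Azure cloud endpoint. The following is a minimal sketch of building the analyze URL for a local container; the host, port, and route layout shown are assumptions, so confirm them against your container's documentation:

```python
# Sketch only: host, port, and route layout are assumptions for a
# self-hosted container; confirm against your container's documentation.

def build_analyze_url(host: str, model_id: str, api_version: str) -> str:
    """Build the v3 analyze-request URL for a self-hosted container."""
    return (f"{host}/formrecognizer/documentModels/"
            f"{model_id}:analyze?api-version={api_version}")

url = build_analyze_url("http://localhost:5000", "prebuilt-read", "2023-07-31")
print(url)
```

You would then POST the document bytes to this URL (for example, with `requests`), exactly as you would against the cloud endpoint, except that no data leaves your environment.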
When the content filtering system detects harmful content, you receive either an error on the API call (if the prompt was deemed inappropriate), or a `finish_reason` of `content_filter` on the response to signify that some of the completion was filtered. When building your application or system, account for scenarios where the content returned by the Completions API is filtered, which might result in incomplete content. How you act on this information is application specific.
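For example, acting on `finish_reason` can be sketched as a small handler; the helper and its outcome labels below are illustrative, while the `finish_reason` values themselves come from the Completions API response:

```python
# Sketch only: the helper and outcome labels are illustrative; the
# finish_reason values come from the Completions API response.

def classify_completion(choice: dict) -> str:
    """Map a choice's finish_reason to an app-level outcome."""
    reason = choice.get("finish_reason")
    if reason == "content_filter":
        # Some or all of the completion was filtered; text may be incomplete.
        return "filtered"
    if reason == "length":
        # Generation stopped at the token limit, not because it finished.
        return "truncated"
    if reason == "stop":
        return "complete"
    return "unknown"

print(classify_completion({"finish_reason": "content_filter"}))  # filtered
```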
As part of your application design, consider the following best practices:
- Apply for modified content filters via [this form](https://ncv.microsoft.com/uEfCgnITdR).
- Azure OpenAI content filtering is powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety).
- Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context).
- Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).
articles/ai-services/openai/concepts/model-retirements.md
titleSuffix: Azure OpenAI
description: Learn about the model deprecations and retirements in Azure OpenAI.
ms.service: azure-ai-openai
ms.topic: conceptual
ms.date: 10/02/2024
ms.custom:
manager: nitinme
author: mrbullwinkle
These models are currently available for use in Azure OpenAI Service.

| Model | Version | Retirement date | Suggested replacements |
| ---- | ---- | ---- | --- |
|`dall-e-2`| 2 | January 27, 2025 |`dall-e-3`|
|`dall-e-3`| 3 | No earlier than April 30, 2025 ||
|`gpt-35-turbo`| 0301 | January 27, 2025<br><br> Deployments set to [**Auto-update to default**](/azure/ai-services/openai/how-to/working-with-models?tabs=powershell#auto-update-to-default) will be automatically upgraded to version: `0125`, starting on November 13, 2024. |`gpt-35-turbo` (0125) <br><br> `gpt-4o-mini`|
|`gpt-35-turbo`<br>`gpt-35-turbo-16k`| 0613 | January 27, 2025 <br><br> Deployments set to [**Auto-update to default**](/azure/ai-services/openai/how-to/working-with-models?tabs=powershell#auto-update-to-default) will be automatically upgraded to version: `0125`, starting on November 13, 2024. |`gpt-35-turbo` (0125) <br><br> `gpt-4o-mini`|
|`gpt-35-turbo`| 1106 | No earlier than January 27, 2025 <br><br> Deployments set to [**Auto-update to default**](/azure/ai-services/openai/how-to/working-with-models?tabs=powershell#auto-update-to-default) will be automatically upgraded to version: `0125`, starting on November 13, 2024. |`gpt-35-turbo` (0125) <br><br> `gpt-4o-mini`|
|`gpt-4`| 1106-preview | To be upgraded to `gpt-4` version: `turbo-2024-04-09`, starting no sooner than January 27, 2025 **<sup>1</sup>**|`gpt-4o`|
|`gpt-4`| 0125-preview |To be upgraded to `gpt-4` version: `turbo-2024-04-09`, starting no sooner than January 27, 2025 **<sup>1</sup>**|`gpt-4o`|
|`gpt-4`| vision-preview | To be upgraded to `gpt-4` version: `turbo-2024-04-09`, starting no sooner than January 27, 2025 **<sup>1</sup>**|`gpt-4o`|
|`gpt-4o`| 2024-05-13 | No earlier than March 20, 2025 <br><br>Deployments set to [**Auto-update to default**](/azure/ai-services/openai/how-to/working-with-models?tabs=powershell#auto-update-to-default) will be automatically upgraded to version: `2024-08-06`, starting on December 5, 2024. ||
|`gpt-3.5-turbo-instruct`| 0914 | No earlier than Sep 14, 2025 ||
|`text-embedding-ada-002`| 2 | No earlier than April 3, 2025 |`text-embedding-3-small` or `text-embedding-3-large`|
|`text-embedding-ada-002`| 1 | No earlier than April 3, 2025 |`text-embedding-3-small` or `text-embedding-3-large`|
Azure OpenAI Service is powered by a diverse set of models with different capabilities.

| Models | Description |
|--|--|
|[o1-preview and o1-mini](#o1-preview-and-o1-mini-models-limited-access)| Limited access models, specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. |
|[GPT-4o & GPT-4o mini & GPT-4 Turbo](#gpt-4o-and-gpt-4-turbo)| The latest most capable Azure OpenAI models with multimodal versions, which can accept both text and images as input. |
|[GPT-4o audio](#gpt-4o-audio)| A GPT-4o model that supports low-latency, "speech in, speech out" conversational interactions. |
|[GPT-4](#gpt-4)| A set of models that improve on GPT-3.5 and can understand and generate natural language and code. |
The Azure OpenAI `o1-preview` and `o1-mini` models are specifically designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, and math compared to previous iterations.
| Model ID | Description | Max Request (tokens) | Training Data (up to) |
| --- | :--- |:--- |:---: |
|`o1-preview` (2024-09-12) | The most capable model in the o1 series, offering enhanced reasoning abilities.| Input: 128,000 <br> Output: 32,768 | Oct 2023 |
|`o1-mini` (2024-09-12) | A faster and more cost-efficient option in the o1 series, ideal for coding tasks requiring speed and lower resource consumption.| Input: 128,000 <br> Output: 65,536 | Oct 2023 |
### Availability
The `o1-preview` and `o1-mini` models are now available for API access and model deployment. **Registration is required, and access will be granted based on Microsoft's eligibility criteria**.
Request access: [limited access model application](https://aka.ms/oai/modelaccess)
Once access has been granted, you will need to create a deployment for each model.
### API support
Support for the **o1 series** models was added in API version `2024-09-01-preview`.
The `max_tokens` parameter has been deprecated and replaced with the new `max_completion_tokens` parameter. **o1 series** models only work with the `max_completion_tokens` parameter. `max_completion_tokens` is backwards compatible with `max_tokens`.
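Migrating existing request code to the o1 series can be sketched as a small parameter shim; the helper name and model-prefix check below are illustrative, not part of the API:

```python
# Sketch only: a small compatibility shim; the model-prefix check and
# helper name are illustrative, not part of the API.

def adapt_params(model: str, params: dict) -> dict:
    """Rename max_tokens to max_completion_tokens for o1 series models."""
    out = dict(params)
    if model.startswith("o1") and "max_tokens" in out:
        out["max_completion_tokens"] = out.pop("max_tokens")
    return out

print(adapt_params("o1-preview", {"max_tokens": 1024}))
# {'max_completion_tokens': 1024}
```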
### Region availability
Available for standard and global standard deployment in East US2 and Sweden Central for approved customers.
## GPT-4o audio
The following models support global batch:

| Model | Version | Input format |
|---|---|---|
|`gpt-4o`| 2024-08-06 | text + image |
|`gpt-4o-mini`| 2024-07-18 | text + image |
|`gpt-4o`| 2024-05-13 | text + image |
|`gpt-4`| turbo-2024-04-09 | text |
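Each line of a global batch input file is a standalone JSON request. The following is a minimal sketch of building one such line; the field names follow the published batch request format, while the `custom_id` and deployment name are placeholders:

```python
import json

# Sketch only: field names follow the published batch request format;
# the custom_id and deployment name are placeholders.

def batch_line(custom_id: str, deployment: str, user_content: str) -> str:
    """Serialize one batch request as a single JSONL line."""
    request = {
        "custom_id": custom_id,      # your correlation ID for this request
        "method": "POST",
        "url": "/chat/completions",
        "body": {
            "model": deployment,     # the deployment name, not the base model
            "messages": [{"role": "user", "content": user_content}],
        },
    }
    return json.dumps(request)

print(batch_line("task-1", "my-gpt-4o-deployment", "Summarize this document."))
```

For the models that accept text + image input, the `content` field can carry image parts in addition to text, per the chat completions request format.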
For the latest information on model retirements, refer to the [model retirement guide](./model-retirements.md).

- [Model retirement and deprecation](./model-retirements.md)
- [Learn more about working with Azure OpenAI models](../how-to/working-with-models.md)
- [Learn more about Azure OpenAI](../overview.md)
- [Learn more about fine-tuning Azure OpenAI models](../how-to/fine-tuning.md)
articles/ai-services/openai/how-to/audio-real-time.md
---
title: 'How to use GPT-4o Realtime API for speech and audio with Azure OpenAI Service'
titleSuffix: Azure OpenAI
description: Learn how to use GPT-4o Realtime API for speech and audio with Azure OpenAI Service.
manager: nitinme
ms.service: azure-ai-openai
ms.topic: how-to
ms.custom: references_regions
recommendations: false
---

# GPT-4o Realtime API for speech and audio (Preview)

Azure OpenAI GPT-4o Realtime API for speech and audio is part of the GPT-4o model family that supports low-latency, "speech in, speech out" conversational interactions. The GPT-4o audio `realtime` API is designed to handle real-time, low-latency conversational interactions, making it a great fit for use cases involving live interactions between a user and a model, such as customer support agents, voice assistants, and real-time translators.

Most users of the Realtime API need to deliver and receive audio from an end user in real time, including applications that use WebRTC or a telephony system. The Realtime API isn't designed to connect directly to end-user devices and relies on client integrations to terminate end-user audio streams.
## Supported models
The `gpt-4o-realtime-preview` model is available for global deployments in East US 2 and Sweden Central.

## API support

Support for the Realtime API was first added in API version `2024-10-01-preview`.

> [!NOTE]
> For more information about the API and architecture, see the [Azure OpenAI GPT-4o real-time audio repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk).
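For orientation, the WebSocket endpoint a Realtime client connects to can be sketched as follows. The host and query-parameter shapes are assumptions based on this API version, so confirm them against the GitHub samples before relying on them:

```python
# Sketch only: endpoint shape inferred for API version 2024-10-01-preview;
# confirm against the GitHub repository samples before relying on it.

def realtime_url(resource: str, deployment: str,
                 api_version: str = "2024-10-01-preview") -> str:
    """Build the WebSocket URL a Realtime client would connect to."""
    return (f"wss://{resource}.openai.azure.com/openai/realtime"
            f"?api-version={api_version}&deployment={deployment}")

print(realtime_url("my-resource", "gpt-4o-realtime-preview"))
```

Authentication (an API key header or a Microsoft Entra token) is handled by the client libraries in the repository samples.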
Before you can use GPT-4o real-time audio, you need a deployment of the `gpt-4o-realtime-preview` model in a supported region as described in the [supported models](#supported-models) section.
You can deploy the model from the [Azure AI Studio model catalog](../../../ai-studio/how-to/model-catalog-overview.md) or from your project in AI Studio. Follow these steps to deploy a `gpt-4o-realtime-preview` model from the model catalog:
1. Sign in to [AI Studio](https://ai.azure.com) and go to the **Home** page.
1. Select **Model catalog** from the left sidebar.
1. Modify other default settings depending on your requirements.
1. Select **Deploy**. You land on the deployment details page.
Now that you have a deployment of the `gpt-4o-realtime-preview` model, you can use the Realtime API to interact with it in real time.
## Use the GPT-4o Realtime API
> [!TIP]
> A playground for GPT-4o real-time audio is coming soon to [Azure AI Studio](https://ai.azure.com). You can already use the API directly in your application.
Right now, the fastest way to get started with the GPT-4o Realtime API is to download the sample code from the [Azure OpenAI GPT-4o real-time audio repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk).
The JavaScript web sample demonstrates how to use the GPT-4o Realtime API to interact with the model in real time. The sample code includes a simple web interface that captures audio from the user's microphone and sends it to the model for processing. The model responds with text and audio, which the sample code renders in the web interface.
You can run the sample code locally on your machine by following these steps. Refer to the [repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk) for the most up-to-date instructions.
1. If you don't have Node.js installed, download and install the [LTS version of Node.js](https://nodejs.org/).
1. Clone the repository to your local machine:
    ```bash
    git clone https://github.com/azure-samples/aoai-realtime-audio-sdk.git
    ```
1. Go to the `javascript/samples` folder in your preferred code editor.
```bash
cd ./javascript/samples
```
1. Run `download-pkg.ps1` or `download-pkg.sh` to download the required packages.
1. Go to the `web` folder from the `./javascript/samples` folder.
```bash
cd ./web
```
1. Run `npm install` to install package dependencies.
1. Run `npm run dev` to start the web server, navigating any firewall permissions prompts as needed.
1. Go to any of the provided URIs from the console output (such as `http://localhost:5173/`) in a browser.