articles/ai-services/openai/concepts/models.md (+1 −1)
```diff
@@ -258,7 +258,7 @@ Details about maximum request tokens and training data are available in the following tables:
 |`gpt-4o-mini-realtime-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
 |`gpt-4o-audio-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for audio and text generation. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
 |`gpt-4o-realtime-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
-|`gpt-4o-realtime-preview` (2024-10-01) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
+|`gpt-4o-mini-realtime-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |

 To compare the availability of GPT-4o audio models across all regions, see the [models table](#global-standard-model-availability).
```
articles/ai-services/openai/how-to/realtime-audio-websockets.md

Azure OpenAI GPT-4o Realtime API for speech and audio is part of the GPT-4o model family that supports low-latency, "speech in, speech out" conversational interactions.
You can use the Realtime API via WebRTC or WebSocket to send audio input to the model and receive audio responses in real time. Follow the instructions in this article to get started with the Realtime API via WebSockets.

Use the Realtime API via WebSockets in server-to-server scenarios where low latency isn't a requirement.

> [!TIP]
> In most cases, we recommend using the [Realtime API via WebRTC](./realtime-audio-webrtc.md) for real-time audio streaming in client-side applications such as a web application or mobile app. WebRTC is designed for low-latency, real-time audio streaming and is the best choice for most use cases.
## Supported models

The GPT-4o real-time models are available for global deployments in [East US 2 and Sweden Central regions](../concepts/models.md#global-standard-model-availability).

- `gpt-4o-mini-realtime-preview` (2024-12-17)
- `gpt-4o-realtime-preview` (2024-12-17)

You should use API version `2025-04-01-preview` in the URL for the Realtime API.

For more information about supported models, see the [models and versions documentation](../concepts/models.md#audio-models).
## Prerequisites

Before you can use GPT-4o real-time audio, you need:

- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
- A deployment of the `gpt-4o-realtime-preview` or `gpt-4o-mini-realtime-preview` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-foundry/how-to/model-catalog-overview.md) or from your project in Azure AI Foundry portal.
## Connection and authentication

The Realtime API (via `/realtime`) is built on [the WebSockets API](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) to facilitate fully asynchronous streaming communication between the end user and model.

The Realtime API is accessed via a secure WebSocket connection to the `/realtime` endpoint of your Azure OpenAI resource.

You can construct a full request URI by concatenating:

- The secure WebSocket (`wss://`) protocol.
- Your Azure OpenAI resource endpoint hostname, for example, `my-aoai-resource.openai.azure.com`.
- The `openai/realtime` API path.
- An `api-version` query string parameter for a supported API version, such as `2025-04-01-preview`.
- A `deployment` query string parameter with the name of your `gpt-4o-realtime-preview` or `gpt-4o-mini-realtime-preview` model deployment.
The following example is a well-constructed `/realtime` request URI:
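A minimal reconstruction from the components above, using placeholder resource and deployment names:

```
wss://my-aoai-resource.openai.azure.com/openai/realtime?api-version=2025-04-01-preview&deployment=gpt-4o-mini-realtime-preview-deployment
```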
## Authentication

- **Microsoft Entra** (recommended): Use token-based authentication with the `/realtime` API for an Azure OpenAI resource with managed identity enabled. Apply a retrieved authentication token using a `Bearer` token with the `Authorization` header.
- **API key**: An `api-key` can be provided in one of two ways:
  - Using an `api-key` connection header on the prehandshake connection. This option isn't available in a browser environment.
  - Using an `api-key` query string parameter on the request URI. Query string parameters are encrypted when using https/wss.
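To make the connection concrete, here's a minimal Python sketch that opens the WebSocket with API key authentication. It assumes the third-party `websockets` package (`pip install websockets`) and the placeholder resource and deployment names from the URI example; treat it as a sketch under those assumptions, not an official SDK pattern.

```python
import asyncio
import json
import os

import websockets  # third-party package; versions before 14 use extra_headers=


URI = (
    "wss://my-aoai-resource.openai.azure.com/openai/realtime"
    "?api-version=2025-04-01-preview"
    "&deployment=gpt-4o-mini-realtime-preview-deployment"
)


async def main() -> None:
    # API key auth: send the key in an `api-key` header on the prehandshake
    # connection. For Microsoft Entra, you would instead send an
    # "Authorization": "Bearer <token>" header with a retrieved token.
    headers = {"api-key": os.environ["AZURE_OPENAI_API_KEY"]}
    async with websockets.connect(URI, additional_headers=headers) as ws:
        # The server's first message announces the new session.
        event = json.loads(await ws.recv())
        print(event.get("type"))  # for example: "session.created"


asyncio.run(main())
```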
## Realtime API via WebSockets architecture

Once the WebSocket connection session to `/realtime` is established and authenticated, the functional interaction takes place via events for sending and receiving WebSocket messages. These events each take the form of a JSON object.
:::image type="content" source="../media/how-to/real-time/realtime-api-sequence.png" alt-text="Diagram of the Realtime API authentication and connection sequence." lightbox="../media/how-to/real-time/realtime-api-sequence.png":::
<!--
sequenceDiagram
actor User as End User
participant MiddleTier as /realtime host
participant AOAI as Azure OpenAI
User->>MiddleTier: Begin interaction
MiddleTier->>MiddleTier: Authenticate/Validate User
AOAI--)MiddleTier: (within items) create/stream/finish content parts
-->
Events can be sent and received in parallel, and applications should generally handle them concurrently and asynchronously; the sketch after the following list shows one way to do this.

- A client-side caller establishes a connection to `/realtime`, which starts a new [`session`](../realtime-audio-reference.md#realtimerequestsession).
- A `session` automatically creates a default `conversation`. Multiple concurrent conversations aren't supported.
- The `conversation` accumulates input signals until a `response` is started, either via a direct event by the caller or automatically by voice activity detection (VAD).
- Each `response` consists of one or more `items`, which can encapsulate messages, function calls, and other information.
- Each message `item` has `content_part` elements, allowing multiple modalities (text and audio) to be represented across a single item.
- The `session` manages configuration of caller input handling (for example, user audio) and common output generation handling.
- Each caller-initiated [`response.create`](../realtime-audio-reference.md#realtimeclienteventresponsecreate) can override some of the output [`response`](../realtime-audio-reference.md#realtimeresponse) behavior, if desired.
- Server-created `item` and `content_part` entries in messages can be populated asynchronously and in parallel, for example, receiving audio, text, and function information concurrently in a round-robin fashion.
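As a hedged illustration of that flow, the following sketch (reusing `ws` and `json` from the connection example above) adds a user message item, requests a response, and drains server events until the response completes. Event names follow the [Realtime API reference](../realtime-audio-reference.md); for brevity, the handler covers only text deltas.

```python
async def run_text_turn(ws) -> None:
    # Add a user message item to the session's default conversation.
    await ws.send(json.dumps({
        "type": "conversation.item.create",
        "item": {
            "type": "message",
            "role": "user",
            "content": [{"type": "input_text", "text": "Hello!"}],
        },
    }))
    # Start a response; session-level defaults apply unless overridden here.
    await ws.send(json.dumps({"type": "response.create"}))

    # Events arrive asynchronously; a real app would dispatch on every type
    # (audio deltas, transcripts, function calls) rather than just these two.
    async for raw in ws:
        event = json.loads(raw)
        if event["type"] == "response.text.delta":
            print(event["delta"], end="", flush=True)
        elif event["type"] == "response.done":
            print()
            break
```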
## Try the quickstart

Now that you have the prerequisites, you can follow the instructions in the [Realtime API quickstart](../realtime-audio-quickstart.md) to get started with the Realtime API via WebSockets.
## Related content

* Try the [real-time audio quickstart](../realtime-audio-quickstart.md)
* See the [Realtime API reference](../realtime-audio-reference.md)
* Learn more about Azure OpenAI [quotas and limits](../quotas-limits.md)
articles/ai-services/openai/how-to/realtime-audio.md (+11 −75)
```diff
@@ -1,11 +1,11 @@
 ---
-title: 'How to use the GPT-4o Realtime API for speech and audio with Azure OpenAI Service'
-titleSuffix: Azure OpenAI
-description: Learn how to use the GPT-4o Realtime API for speech and audio with Azure OpenAI Service.
+title: 'How to use the GPT-4o Realtime API for speech and audio with Azure OpenAI'
+titleSuffix: Azure OpenAI Service
+description: Learn how to use the GPT-4o Realtime API for speech and audio with Azure OpenAI.
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 3/20/2025
+ms.date: 4/28/2025
 author: eric-urban
 ms.author: eur
 ms.custom: references_regions
```
```diff
@@ -20,12 +20,17 @@ Azure OpenAI GPT-4o Realtime API for speech and audio is part of the GPT-4o model family that supports low-latency, "speech in, speech out" conversational interactions.
 
 Most users of the Realtime API need to deliver and receive audio from an end-user in real time, including applications that use WebRTC or a telephony system. The Realtime API isn't designed to connect directly to end user devices and relies on client integrations to terminate end user audio streams.
 
+You can use the Realtime API via WebRTC or WebSocket to send audio input to the model and receive audio responses in real time. In most cases, we recommend using the WebRTC API for low-latency real-time audio streaming. For more information, see:
+- [Realtime API via WebRTC](./realtime-audio-webrtc.md)
+- [Realtime API via WebSockets](./realtime-audio-websockets.md)
+
 ## Supported models
 
 The GPT-4o real-time models are available for global deployments in [East US 2 and Sweden Central regions](../concepts/models.md#global-standard-model-availability).
 - `gpt-4o-mini-realtime-preview` (2024-12-17)
 - `gpt-4o-realtime-preview` (2024-12-17)
-- `gpt-4o-realtime-preview` (2024-10-01)
+
+You should use API version `2025-04-01-preview` in the URL for the Realtime API.
 
 See the [models and versions documentation](../concepts/models.md#audio-models) for more information.
```
```diff
@@ -39,78 +44,9 @@ Before you can use GPT-4o real-time audio, you need:
 
 Here are some of the ways you can get started with the GPT-4o Realtime API for speech and audio:
 - For steps to deploy and use the `gpt-4o-realtime-preview` or `gpt-4o-mini-realtime-preview` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).
-- Download the sample code from the [Azure OpenAI GPT-4o real-time audio repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk).
+- Try the [WebRTC via HTML and JavaScript example](./realtime-audio-webrtc.md#webrtc-example-via-html-and-javascript) to get started with the Realtime API via WebRTC.
 - [The Azure-Samples/aisearch-openai-rag-audio repo](https://github.com/Azure-Samples/aisearch-openai-rag-audio) contains an example of how to implement RAG support in applications that use voice as their user interface, powered by the GPT-4o realtime API for audio.
-
-## Connection and authentication
-
-The Realtime API (via `/realtime`) is built on [the WebSockets API](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) to facilitate fully asynchronous streaming communication between the end user and model.
-
-> [!IMPORTANT]
-> Device details like capturing and rendering audio data are outside the scope of the Realtime API. It should be used in the context of a trusted, intermediate service that manages both connections to end users and model endpoint connections. Don't use it directly from untrusted end user devices.
-
-The Realtime API is accessed via a secure WebSocket connection to the `/realtime` endpoint of your Azure OpenAI resource.
-
-You can construct a full request URI by concatenating:
-
-- The secure WebSocket (`wss://`) protocol.
-- Your Azure OpenAI resource endpoint hostname, for example, `my-aoai-resource.openai.azure.com`
-- The `openai/realtime` API path.
-- An `api-version` query string parameter for a supported API version such as `2024-12-17`
-- A `deployment` query string parameter with the name of your `gpt-4o-realtime-preview` or `gpt-4o-mini-realtime-preview` model deployment.
-
-The following example is a well-constructed `/realtime` request URI:
-- **Microsoft Entra** (recommended): Use token-based authentication with the `/realtime` API for an Azure OpenAI Service resource with managed identity enabled. Apply a retrieved authentication token using a `Bearer` token with the `Authorization` header.
-- **API key**: An `api-key` can be provided in one of two ways:
-  - Using an `api-key` connection header on the prehandshake connection. This option isn't available in a browser environment.
-  - Using an `api-key` query string parameter on the request URI. Query string parameters are encrypted when using https/wss.
-
-## Realtime API architecture
-
-Once the WebSocket connection session to `/realtime` is established and authenticated, the functional interaction takes place via events for sending and receiving WebSocket messages. These events each take the form of a JSON object.
-
-:::image type="content" source="../media/how-to/real-time/realtime-api-sequence.png" alt-text="Diagram of the Realtime API authentication and connection sequence." lightbox="../media/how-to/real-time/realtime-api-sequence.png":::
-
-<!--
-sequenceDiagram
-actor User as End User
-participant MiddleTier as /realtime host
-participant AOAI as Azure OpenAI
-User->>MiddleTier: Begin interaction
-MiddleTier->>MiddleTier: Authenticate/Validate User
-AOAI--)MiddleTier: (within items) create/stream/finish content parts
--->
-
-Events can be sent and received in parallel and applications should generally handle them both concurrently and asynchronously.
-
-- A client-side caller establishes a connection to `/realtime`, which starts a new [`session`](#session-configuration).
-- A `session` automatically creates a default `conversation`. Multiple concurrent conversations aren't supported.
-- The `conversation` accumulates input signals until a `response` is started, either via a direct event by the caller or automatically by voice activity detection (VAD).
-- Each `response` consists of one or more `items`, which can encapsulate messages, function calls, and other information.
-- Each message `item` has `content_part`, allowing multiple modalities (text and audio) to be represented across a single item.
-- The `session` manages configuration of caller input handling (for example, user audio) and common output generation handling.
-- Each caller-initiated [`response.create`](../realtime-audio-reference.md#realtimeclienteventresponsecreate) can override some of the output [`response`](../realtime-audio-reference.md#realtimeresponse) behavior, if desired.
-- Server-created `item` and the `content_part` in messages can be populated asynchronously and in parallel. For example, receiving audio, text, and function information concurrently in a round robin fashion.
```
## Session configuration
Often, the first event sent by the caller on a newly established `/realtime` session is a [`session.update`](../realtime-audio-reference.md#realtimeclienteventsessionupdate) payload. This event controls a wide set of input and output behavior, with output and response generation properties then later overridable using the [`response.create`](../realtime-audio-reference.md#realtimeclienteventresponsecreate) event.
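For illustration, here's a minimal sketch of such a first `session.update` event. It assumes a connected WebSocket `ws` (as in the earlier connection sketch) and Python's standard `json` module; the field names follow the Realtime API reference, and the values are illustrative.

```python
async def configure_session(ws) -> None:
    # Configure input handling and default response behavior for the session.
    await ws.send(json.dumps({
        "type": "session.update",
        "session": {
            "instructions": "You are a helpful assistant.",
            "voice": "alloy",
            "input_audio_format": "pcm16",
            # Let server-side voice activity detection decide turn boundaries.
            "turn_detection": {"type": "server_vad"},
        },
    }))
```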