Support for the Realtime API was first added in API version `2024-10-01-preview`.

> [!NOTE]
> For more information about the API and architecture, see the [Azure OpenAI GPT-4o real-time audio repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk).

## Get started

Before you can use GPT-4o real-time audio, you need:

- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
- A deployment of the `gpt-4o-realtime-preview` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Studio model catalog](../../ai-studio/how-to/model-catalog-overview.md) or from your project in AI Studio.

For steps to deploy and use the `gpt-4o-realtime-preview` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).

For more information about the API and architecture, see the remaining sections in this guide.

## Connection and authentication with the Realtime API

The `/realtime` API requires an existing Azure OpenAI resource endpoint in a supported region. A full request URI can be constructed by concatenating:

1. The secure WebSocket (`wss://`) protocol
2. Your Azure OpenAI resource endpoint hostname, for example `my-aoai-resource.openai.azure.com`
3. The `openai/realtime` API path
4. An `api-version` query string parameter for a supported API version, initially `2024-10-01-preview`
5. A `deployment` query string parameter with the name of your `gpt-4o-realtime-preview` model deployment

Combining these parts, the following is an example of a well-constructed `/realtime` request URI:

```http
wss://my-eastus2-openai-resource.openai.azure.com/openai/realtime?api-version=2024-10-01-preview&deployment=gpt-4o-realtime-preview-1001
```

To authenticate:

- **Using Microsoft Entra**: `/realtime` supports token-based authentication against an appropriately configured Azure OpenAI Service resource that has managed identity enabled. Apply a retrieved authentication token by using a `Bearer` token with the `Authorization` header.
- **Using an API key**: An `api-key` can be provided in one of two ways (see the connection sketch after this list):
  1. Using an `api-key` connection header on the pre-handshake connection (note: not available in a browser environment)
  2. Using an `api-key` query string parameter on the request URI (note: query string parameters are encrypted when using https/wss)
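
The following Python snippet is a minimal connection sketch using an API key. It assumes the third-party `websockets` package and an `AZURE_OPENAI_API_KEY` environment variable; the resource and deployment names are placeholders carried over from the example URI above.

```python
# Minimal connection sketch (assumes the `websockets` Python package).
# Resource and deployment names are placeholders; substitute your own.
import asyncio
import json
import os

import websockets

URI = (
    "wss://my-eastus2-openai-resource.openai.azure.com/openai/realtime"
    "?api-version=2024-10-01-preview"
    "&deployment=gpt-4o-realtime-preview-1001"
)

async def main() -> None:
    # The `api-key` header authenticates the pre-handshake connection.
    # (The parameter name varies by websockets version: `extra_headers`
    # before v14, `additional_headers` from v14 on.)
    async with websockets.connect(
        URI, additional_headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]}
    ) as ws:
        # The first message from the server acknowledges session creation.
        print(json.loads(await ws.recv())["type"])

asyncio.run(main())
```
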
## Architecture

The Realtime API (via `/realtime`) is built on [the WebSockets API](https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API) to facilitate fully asynchronous streaming communication between the end user and model. It should be used in the context of a trusted, intermediate service that manages both connections to end users and model endpoint connections. Don't use it directly from untrusted end user devices. Device details like capturing and rendering audio data are outside the scope of the Realtime API.

## API concepts

- A caller establishes a connection to `/realtime`, which starts a new `session`.
- A `session` automatically creates a default `conversation`. Multiple concurrent conversations aren't supported.
- The `conversation` accumulates input signals until a `response` is started, either via a direct command from the caller or automatically by turn detection based on voice activity detection (VAD).
- Each `response` consists of one or more `items`, which can encapsulate messages, function calls, and other information.
- Each message `item` has one or more `content_part` elements, allowing multiple modalities (text and audio) to be represented in a single item (see the sketch after this list).
- The `session` manages configuration of caller input handling (for example, user audio) and common output generation handling.
- Each caller-initiated `response.create` can override some of the output `response` behavior, if desired.
- Server-created `item` entries and the `content_part` entries in messages can be populated asynchronously and in parallel. For example, you can receive audio, text, and function information concurrently in a round-robin fashion.

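To make the hierarchy concrete, here's a hedged sketch of the approximate JSON shape of a message `item` that carries a text `content_part`. The field names are illustrative, not a complete schema; see the API reference for authoritative definitions.

```python
# Illustrative shape of a message `item` with one text content part.
# (Approximate structure only; consult the API reference for the schema.)
message_item = {
    "type": "message",
    "role": "user",
    "content": [
        {"type": "input_text", "text": "What time does the store close today?"},
        # An audio content part would appear alongside the text part here.
    ],
}
```
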
## API details

Once the WebSocket connection session to `/realtime` is established and authenticated, the functional interaction takes place via sending and receiving WebSocket messages, herein referred to as "commands" to avoid ambiguity with the content-bearing "message" concept already present for inference. Each command takes the form of a JSON object. Commands can be sent and received in parallel, and applications should generally handle them both concurrently and asynchronously.
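
As a sketch of that concurrent handling, the following Python fragment (assuming an established connection `ws` from the earlier example) sends queued commands and receives incoming ones in parallel, dispatching on each command's `type` field.

```python
# Concurrent send/receive sketch over an established connection `ws`.
import asyncio
import json

async def send_commands(ws, outgoing: asyncio.Queue) -> None:
    # Each command is a JSON object sent as a text frame.
    while True:
        command = await outgoing.get()
        await ws.send(json.dumps(command))

async def receive_commands(ws) -> None:
    # Commands arrive interleaved; dispatch on the `type` field.
    async for frame in ws:
        command = json.loads(frame)
        print("received:", command.get("type"))

async def run(ws, outgoing: asyncio.Queue) -> None:
    # Handle both directions concurrently and asynchronously.
    await asyncio.gather(send_commands(ws, outgoing), receive_commands(ws))
```
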
### Session configuration and turn handling mode

Often, the first command sent by the caller on a newly established `/realtime` session is a `session.update` payload. This command controls a wide set of input and output behavior, with output and response generation portions then later overridable via `update_conversation_config` or other properties in `response.create`.

One of the key session-wide settings is `turn_detection`, which controls how data flow is handled between the caller and model:

- `server_vad` evaluates incoming user audio (as sent via `add_user_audio`) using a voice activity detector (VAD) component and automatically uses that audio to initiate response generation on applicable conversations when an end of speech is detected. Silence detection for the VAD can be configured when specifying `server_vad` detection mode.
- `none` relies on caller-initiated `input_audio_buffer.commit` and `response.create` commands to progress conversations and produce output. This setting is useful for push-to-talk applications or situations that have external audio flow control (such as a caller-side VAD component). These manual signals can still be used in `server_vad` mode to supplement VAD-initiated response generation.

Transcription of user input audio is opted into via the `input_audio_transcription` property. Specifying a transcription model (`whisper-1`) in this configuration enables the delivery of `conversation.item.audio_transcription.completed` events.
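
Putting those settings together, a hedged sketch of a `session.update` command follows. The property names mirror the settings described above; the numeric VAD values are illustrative placeholders rather than documented defaults.

```python
# Hedged sketch of a `session.update` command payload.
# Treat the numeric values as illustrative placeholders.
session_update = {
    "type": "session.update",
    "session": {
        # Let the service detect end of speech and start responses.
        "turn_detection": {
            "type": "server_vad",
            "threshold": 0.5,             # VAD sensitivity (illustrative)
            "silence_duration_ms": 500,   # silence treated as end of speech
        },
        # Opt in to transcription of user input audio.
        "input_audio_transcription": {"model": "whisper-1"},
    },
}

# Typically sent as the first command on a newly established session:
# await ws.send(json.dumps(session_update))
```
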

## Summary of commands

### Requests

The following table describes commands sent from the caller to the `/realtime` endpoint.

| `type` | Description |
|---|---|
| **Session Configuration** | |
| `session.update` | Configures the connection-wide behavior of the conversation session, such as shared audio input handling and common response generation characteristics. This command is typically sent immediately after connecting, but it can also be sent at any point during a session to reconfigure behavior after the current response (if in progress) is complete. |
| **Input Audio** | |
| `input_audio_buffer_append` | Appends audio data to the shared user input buffer. This audio won't be processed until an end of speech is detected in the `server_vad` `turn_detection` mode or until a manual `response.create` is sent (in either `turn_detection` configuration). |
| `input_audio_buffer_clear` | Clears the current audio input buffer. This command doesn't affect responses already in progress. |
| `input_audio_buffer_commit` | Commits the current state of the user input buffer to subscribed conversations, including it as information for the next response. |
| **Item Management** | For establishing history or including non-audio item information. |
| `item_create` | Inserts a new item into the conversation, optionally positioned according to `previous_item_id`. This command can provide new, non-audio input from the user (such as a text message), tool responses, or historical information from another interaction to form a conversation history before generation. |
| `item_delete` | Removes an item from an existing conversation. |
| `item_truncate` | Manually shortens text and audio content in a message. This command can be useful in situations where faster-than-real-time model generation produces more data that's later skipped because of an interruption. |
| **Response Management** | |
| `response.create` | Initiates model processing of unprocessed conversation input, signifying the end of the caller's logical turn. `server_vad` `turn_detection` mode automatically triggers generation at end of speech, but `response.create` must be sent in other circumstances (such as text input, tool responses, and `none` mode) to signal that the conversation should continue. Send `response.create` after the model's `response.done` command confirms that all tool calls and other messages are provided. |
| `response.cancel` | Cancels an in-progress response. |
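
As a usage sketch, the following shows a caller-side push-to-talk turn in the `none` turn detection mode, combining the input audio and response commands from the table. The dotted command names follow the convention of `session.update` and `response.create` used in the prose; the `ws` connection and the raw PCM `audio_chunks` input are assumptions carried over from earlier examples.

```python
# Push-to-talk flow sketch: stream audio, then commit and request a response.
import base64
import json

async def push_to_talk_turn(ws, audio_chunks) -> None:
    # Stream captured audio into the shared user input buffer.
    for chunk in audio_chunks:  # `chunk` is raw PCM bytes (assumed format)
        await ws.send(json.dumps({
            "type": "input_audio_buffer.append",
            "audio": base64.b64encode(chunk).decode("ascii"),
        }))
    # In `none` mode, the caller signals the end of its turn explicitly.
    await ws.send(json.dumps({"type": "input_audio_buffer.commit"}))
    # Ask the model to generate a response from the committed input.
    await ws.send(json.dumps({"type": "response.create"}))
```
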

### Responses

The following table describes commands sent by the `/realtime` endpoint to the caller.

| `type` | Description |
|---|---|
| **Session** | |
| `session_created` | Sent as soon as the connection is successfully established. Provides a connection-specific ID that might be useful for debugging or logging. |
| **Caller Item Acknowledgement** | |
| `item_created` | Provides acknowledgment that a new conversation item is inserted into a conversation. |
| `item_deleted` | Provides acknowledgment that an existing conversation item is removed from a conversation. |
| `item_truncated` | Provides acknowledgment that an existing item in a conversation is truncated. |
| **Response Flow** | |
| `response_created` | Notifies that a new response is started for a conversation. This command snapshots the input state and begins generation of new items. Until `response_done` signifies the end of the response, a response can create items via `response_output_item_added` that are then populated via `delta` commands. |
| `response_done` | Notifies that a response generation is complete for a conversation. |
| `response_cancelled` | Confirms that a response was canceled in response to a caller-initiated or internal signal. |
| `rate_limits_updated` | Sent immediately after `response_done`, this command provides the current rate limit information, reflecting updated status after the consumption of the just-finished response. |
| **Item Flow in a Response** | |
| `response_output_item_added` | Notifies that a new, server-generated conversation item *is being created*; content is then populated via incremental `delta` commands, with a final `response_output_item_done` command signifying that the item creation is complete. |
| `response_output_item_done` | Notifies that a new conversation item is added to a conversation. For model-generated messages, this command is preceded by `response_output_item_added` and `delta` commands, which begin and populate the new item, respectively. |
| **Content Flow within Response Items** | |
| `response_content_part_added` | Notifies that a new content part is being created within a conversation item in an ongoing response. Content is then incrementally provided via the appropriate `delta` commands until `response_content_part_done` arrives. |
| `response_content_part_done` | Signals that a newly created content part is complete and receives no further incremental updates. |
| `response_audio_delta` | Provides an incremental update to a binary audio data content part generated by the model. |
| `response_audio_done` | Signals that an audio content part's incremental updates are complete. |
| `response_audio_transcript_delta` | Provides an incremental update to the audio transcription associated with the output audio content generated by the model. |
| `response_audio_transcript_done` | Signals that the incremental updates to audio transcription of output audio are complete. |
| `response_text_delta` | Provides an incremental update to a text content part within a conversation message item. |
| `response_text_done` | Signals that the incremental updates to a text content part are complete. |
| `response_function_call_arguments_delta` | Provides an incremental update to the arguments of a function call, as represented within an item in a conversation. |
| `response_function_call_arguments_done` | Signals that incremental function call arguments are complete and that accumulated arguments can now be used in their entirety. |
| **User Input Audio** | |
| `input_audio_buffer_speech_started` | When you use configured voice activity detection, this command notifies that a start of user speech is detected within the input audio buffer at a specific audio sample index. |
| `input_audio_buffer_speech_stopped` | When you use configured voice activity detection, this command notifies that an end of user speech is detected within the input audio buffer at a specific audio sample index. This event automatically triggers response generation when so configured. |
| `item_input_audio_transcription_completed` | Notifies that a supplementary transcription of the user's input audio buffer is available. This behavior must be opted into via the `input_audio_transcription` property in `session.update`. |
| `item_input_audio_transcription_failed` | Notifies that input audio transcription failed. |
| `input_audio_buffer_committed` | Provides acknowledgment that the current state of the user audio input buffer is submitted to subscribed conversations. |
| `input_audio_buffer_cleared` | Provides acknowledgment that the pending user audio input buffer is cleared. |
| **Other** | |
| `error` | Indicates that something went wrong while processing data on the session. Includes an `error` message that provides more detail. |
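
To illustrate consuming this event flow, here's a hedged sketch of a receive loop that accumulates the model's audio and transcript deltas until the response completes. The event names follow the table above, and the `delta` payload field is an assumption; verify both against the API version you target.

```python
# Receive-loop sketch: accumulate audio and transcript deltas until done.
# Event `type` names follow the tables above; the `delta` field name is
# an assumption to verify against the targeted API version.
import base64
import json

async def collect_response(ws) -> tuple[bytes, str]:
    audio = bytearray()   # raw model audio, reassembled from deltas
    transcript = []       # incremental transcript text
    async for frame in ws:
        event = json.loads(frame)
        kind = event.get("type")
        if kind == "response_audio_delta":
            audio.extend(base64.b64decode(event["delta"]))
        elif kind == "response_audio_transcript_delta":
            transcript.append(event["delta"])
        elif kind == "response_done":
            break  # response generation is complete
        elif kind == "error":
            raise RuntimeError(event)
    return bytes(audio), "".join(transcript)
```
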