`articles/ai-services/openai/how-to/audio-real-time.md`: 3 additions & 3 deletions
@@ -15,7 +15,7 @@ recommendations: false
Azure OpenAI GPT-4o audio is part of the GPT-4o model family that supports low-latency, "speech in, speech out" conversational interactions. The GPT-4o audio `realtime` API is designed to handle real-time, low-latency conversational interactions, making it a great fit for use cases involving live interactions between a user and a model, such as customer support agents, voice assistants, and real-time translators.
-Most users of this API need to deliver and receive audio from an end-user in real-time, including applications that use WebRTC or a telephony system. The real-time API isn't designed to connect directly to end user devices and relies on client integrations to terminate end user audio streams.
+Most users of this API need to deliver and receive audio from an end-user in realtime, including applications that use WebRTC or a telephony system. The real-time API isn't designed to connect directly to end user devices and relies on client integrations to terminate end user audio streams.
## Supported models
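The `realtime` API described in the hunk above is reached over a persistent WebSocket connection rather than a one-shot HTTPS call. As a rough sketch (the endpoint path, query parameters, and `session.update` event shape below are assumptions based on the preview protocol, not taken from this article), a client might build its connection URL and opening configuration like this:

```python
import json
from urllib.parse import urlencode


def build_realtime_url(resource: str, deployment: str,
                       api_version: str = "2024-10-01-preview") -> str:
    """Build a WebSocket URL for an Azure OpenAI realtime deployment.

    The path and query parameter names here are assumptions for the
    preview API; check your resource's documentation for exact values.
    """
    query = urlencode({"api-version": api_version, "deployment": deployment})
    return f"wss://{resource}.openai.azure.com/openai/realtime?{query}"


def session_update(voice: str = "alloy") -> str:
    """A hypothetical session.update event requesting audio and text output."""
    return json.dumps({
        "type": "session.update",
        "session": {"voice": voice, "modalities": ["audio", "text"]},
    })


url = build_realtime_url("my-resource", "gpt-4o-realtime-preview")
```

An actual client would then open `url` with a WebSocket library (authenticating with an API key or Microsoft Entra token), send the `session.update` payload, and stream audio in both directions.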
@@ -55,7 +55,7 @@ You can deploy the model from the Azure OpenAI model catalog or from your projec
1. Modify other default settings depending on your requirements.
1. Select **Deploy**. You land on the deployment details page.
-Now that you have a deployment of the `gpt-4o-realtime-preview` model, you can use the playground to interact with the model in real-time. Select **Early access playground** from the list of playgrounds in the left pane.
+Now that you have a deployment of the `gpt-4o-realtime-preview` model, you can use the playground to interact with the model in realtime. Select **Early access playground** from the list of playgrounds in the left pane.
## Use the GPT-4o real-time audio API
@@ -64,7 +64,7 @@ Now that you have a deployment of the `gpt-4o-realtime-preview` model, you can u
Right now, the fastest way to get started with GPT-4o real-time audio is to download the sample code from the [Azure OpenAI GPT-4o real-time audio repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk).
-The JavaScript web sample demonstrates how to use the GPT-4o real-time audio API to interact with the model in real-time. The sample code includes a simple web interface that captures audio from the user's microphone and sends it to the model for processing. The model responds with text and audio, which the sample code renders in the web interface.
+The JavaScript web sample demonstrates how to use the GPT-4o real-time audio API to interact with the model in realtime. The sample code includes a simple web interface that captures audio from the user's microphone and sends it to the model for processing. The model responds with text and audio, which the sample code renders in the web interface.
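The microphone-to-model flow that the sample paragraph above describes boils down to streaming small audio chunks to the service as base64-encoded events. A minimal sketch of that framing, assuming an `input_audio_buffer.append` event type from the preview protocol (the event name and field names are assumptions, not confirmed by this article):

```python
import base64
import json


def audio_append_event(pcm_chunk: bytes) -> str:
    """Wrap a raw PCM16 audio chunk in a hypothetical
    input_audio_buffer.append event for the realtime WebSocket."""
    return json.dumps({
        "type": "input_audio_buffer.append",
        "audio": base64.b64encode(pcm_chunk).decode("ascii"),
    })


# A client loop would call this for each chunk read from the microphone
# and send the result over the open WebSocket connection.
event = audio_append_event(b"\x00\x01" * 160)
```

The model's text and audio responses arrive as server events on the same socket, which the web sample decodes and renders in the browser.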