Commit 339cd48

init realtime api how to
1 parent a21969d commit 339cd48

1 file changed: 63 additions & 0 deletions
---
title: 'How to use the GPT-4o Realtime API for speech and audio with Azure OpenAI Service'
titleSuffix: Azure OpenAI
description: Learn how to use the GPT-4o Realtime API for speech and audio with Azure OpenAI Service.
manager: nitinme
ms.service: azure-ai-openai
ms.topic: how-to
ms.date: 11/12/2024
author: eric-urban
ms.author: eur
ms.custom: references_regions
zone_pivot_groups: openai-studio-js
recommendations: false
---

# How to use the GPT-4o Realtime API for speech and audio (Preview)

The Azure OpenAI GPT-4o Realtime API for speech and audio is part of the GPT-4o model family and supports low-latency, "speech in, speech out" conversational interactions. It's designed for real-time conversation, making it a great fit for use cases involving live interaction between a user and a model, such as customer support agents, voice assistants, and real-time translators.

Most users of the Realtime API need to deliver and receive audio from an end user in real time, including applications that use WebRTC or a telephony system. The Realtime API isn't designed to connect directly to end-user devices; it relies on client integrations to terminate end-user audio streams.
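
A common pattern is to run a lightweight relay service between the end-user client and the Realtime API: the browser or telephony gateway streams audio to your backend, and the backend holds the WebSocket connection to Azure OpenAI. The following TypeScript sketch (using the `ws` package) illustrates that shape only. The endpoint path, query parameters, and environment variable names are assumptions based on the preview API and samples; confirm them against the samples repository referenced later in this article.

```typescript
// Minimal relay sketch (not production code): end-user client <-> this server <-> Realtime API.
// Assumptions: an existing `gpt-4o-realtime-preview` deployment and the
// wss://<resource>.openai.azure.com/openai/realtime endpoint shape used by the preview samples.
import WebSocket, { WebSocketServer, RawData } from "ws";

const realtimeUrl =
  `wss://${process.env.AZURE_OPENAI_RESOURCE}.openai.azure.com/openai/realtime` +
  `?api-version=2024-10-01-preview&deployment=${process.env.AZURE_OPENAI_DEPLOYMENT}`;

// Accept connections from your own client app (browser, telephony gateway, and so on).
const server = new WebSocketServer({ port: 8080 });

server.on("connection", (client: WebSocket) => {
  // One upstream Realtime API connection per end-user session.
  const upstream = new WebSocket(realtimeUrl, {
    headers: { "api-key": process.env.AZURE_OPENAI_API_KEY ?? "" },
  });

  // Forward client events (already JSON-encoded Realtime API messages) upstream,
  // and forward server events back to the client. Messages sent before the
  // upstream socket is open are dropped in this sketch.
  client.on("message", (data: RawData) => {
    if (upstream.readyState === WebSocket.OPEN) upstream.send(data.toString());
  });
  upstream.on("message", (data: RawData) => {
    if (client.readyState === WebSocket.OPEN) client.send(data.toString());
  });

  // Tear the pair down together.
  const closeBoth = () => {
    client.close();
    upstream.close();
  };
  client.on("close", closeBoth);
  upstream.on("close", closeBoth);
  upstream.on("error", closeBoth);
});
```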

## Supported models

Currently, only the `gpt-4o-realtime-preview` model (version `2024-10-01-preview`) supports real-time audio.

The `gpt-4o-realtime-preview` model is available for global deployments in [East US 2 and Sweden Central regions](../concepts/models.md#global-standard-model-availability).

> [!IMPORTANT]
> The system stores your prompts and completions as described in the "Data Use and Access for Abuse Monitoring" section of the service-specific Product Terms for Azure OpenAI Service, except that the Limited Exception does not apply. Abuse monitoring will be turned on for use of the `gpt-4o-realtime-preview` API even for customers who otherwise are approved for modified abuse monitoring.

## API support

Support for the Realtime API was first added in API version `2024-10-01-preview`.

> [!NOTE]
> For more information about the API and architecture, see the [Azure OpenAI GPT-4o real-time audio repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk).
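
Interactions with the Realtime API happen as JSON events exchanged over a WebSocket connection. As an illustration, the hedged TypeScript sketch below shows what configuring a session might look like once a connection is open. The event and field names (`session.update`, `input_audio_format`, `turn_detection`, the `alloy` voice) follow the preview protocol described in the samples repository and may change, so treat them as assumptions rather than a fixed contract.

```typescript
// Hedged sketch: configure a Realtime API session over an already-open WebSocket.
import WebSocket from "ws";

export function configureSession(ws: WebSocket): void {
  const sessionUpdate = {
    type: "session.update",
    session: {
      instructions: "You are a helpful, concise voice assistant.",
      voice: "alloy",
      input_audio_format: "pcm16",
      output_audio_format: "pcm16",
      // Let the service detect the end of the user's speech (server-side voice
      // activity detection) instead of committing turns manually.
      turn_detection: { type: "server_vad" },
    },
  };
  ws.send(JSON.stringify(sessionUpdate));
}
```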

## Prerequisites

Before you can use GPT-4o real-time audio, you need:

- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).

## Deploy a model for real-time audio

You need a deployment of the `gpt-4o-realtime-preview` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Studio model catalog](../../ai-studio/how-to/model-catalog-overview.md) or from your project in AI Studio.

## Use GPT-4o real-time audio

For steps to deploy and use the `gpt-4o-realtime-preview` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).
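
If you want a feel for the message flow before opening the quickstart, the hedged sketch below walks through one "speech in, speech out" turn over an already-open connection: append base64-encoded PCM16 audio, commit the input buffer, request a response, and collect the audio the model streams back. The event names (`input_audio_buffer.append`, `response.create`, `response.audio.delta`, `response.done`) and the single-chunk upload are illustrative assumptions; the quickstart and samples repository show the supported flow in detail.

```typescript
// Hedged sketch of one conversational turn over an open Realtime API WebSocket.
import WebSocket, { RawData } from "ws";
import { readFileSync, writeFileSync } from "node:fs";

export function runOneTurn(ws: WebSocket, inputPcmPath: string, outputPcmPath: string): void {
  const outputChunks: Buffer[] = [];

  ws.on("message", (data: RawData) => {
    const event = JSON.parse(data.toString());
    if (event.type === "response.audio.delta") {
      // Output audio arrives as base64-encoded PCM16 chunks.
      outputChunks.push(Buffer.from(event.delta, "base64"));
    } else if (event.type === "response.done") {
      writeFileSync(outputPcmPath, Buffer.concat(outputChunks));
    } else if (event.type === "error") {
      console.error("Realtime API error:", event);
    }
  });

  // Send the user's audio (a real client would stream it in smaller chunks),
  // close out the input buffer, and ask the model for an audio-and-text reply.
  const inputAudio = readFileSync(inputPcmPath);
  ws.send(JSON.stringify({ type: "input_audio_buffer.append", audio: inputAudio.toString("base64") }));
  ws.send(JSON.stringify({ type: "input_audio_buffer.commit" }));
  ws.send(JSON.stringify({ type: "response.create", response: { modalities: ["audio", "text"] } }));
}
```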

## Related content

* Learn more about Azure OpenAI [deployment types](deployment-types.md)
* Learn more about Azure OpenAI [quotas and limits](quotas-limits.md)
