Commit b145e87

Merge pull request #2059 from eric-urban/eur/realtime-howto
realtime how-to updates
2 parents: 5a86b38 + 1a59b45

File tree: 1 file changed (+6, −9 lines)


articles/ai-services/openai/how-to/realtime-audio.md

Lines changed: 6 additions & 9 deletions
```diff
@@ -44,15 +44,10 @@ Before you can use GPT-4o real-time audio, you need:
 - An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](create-resource.md).
 - You need a deployment of the `gpt-4o-realtime-preview` model in a supported region as described in the [supported models](#supported-models) section. You can deploy the model from the [Azure AI Foundry portal model catalog](../../../ai-studio/how-to/model-catalog-overview.md) or from your project in AI Foundry portal.
 
-For steps to deploy and use the `gpt-4o-realtime-preview` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).
-
-For more information about the API and architecture, see the remaining sections in this guide.
-
-## Sample code
-
-Right now, the fastest way to get started development with the GPT-4o Realtime API is to download the sample code from the [Azure OpenAI GPT-4o real-time audio repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk).
-
-[The Azure-Samples/aisearch-openai-rag-audio repo](https://github.com/Azure-Samples/aisearch-openai-rag-audio) contains an example of how to implement RAG support in applications that use voice as their user interface, powered by the GPT-4o realtime API for audio.
+Here are some of the ways you can get started with the GPT-4o Realtime API for speech and audio:
+
+- For steps to deploy and use the `gpt-4o-realtime-preview` model, see [the real-time audio quickstart](../realtime-audio-quickstart.md).
+- Download the sample code from the [Azure OpenAI GPT-4o real-time audio repository on GitHub](https://github.com/azure-samples/aoai-realtime-audio-sdk).
+- [The Azure-Samples/aisearch-openai-rag-audio repo](https://github.com/Azure-Samples/aisearch-openai-rag-audio) contains an example of how to implement RAG support in applications that use voice as their user interface, powered by the GPT-4o realtime API for audio.
 
 ## Connection and authentication
 
```
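For context on the "Connection and authentication" section referenced in the diff above: the Realtime API is accessed over a WebSocket against the Azure OpenAI resource endpoint. Below is a minimal sketch of building that connection URL; the endpoint shape, the `api-version` value, and the helper name `realtime_ws_url` are illustrative assumptions here, not taken from this commit, so verify them against the linked quickstart.

```python
from urllib.parse import urlencode

# Hypothetical helper: builds the WebSocket URL for the Azure OpenAI
# Realtime API. The path and api-version below are assumptions based on
# the preview; check the quickstart for the current values.
def realtime_ws_url(resource: str, deployment: str,
                    api_version: str = "2024-10-01-preview") -> str:
    query = urlencode({"api-version": api_version, "deployment": deployment})
    return f"wss://{resource}.openai.azure.com/openai/realtime?{query}"

url = realtime_ws_url("my-aoai-resource", "gpt-4o-realtime-preview")
# Authentication is typically the resource key in an "api-key" header, or a
# Microsoft Entra bearer token, supplied when opening the WebSocket with
# the client library of your choice.
```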
```diff
@@ -278,6 +273,8 @@ A user might want to interrupt the assistant's response or ask the assistant to
 - Truncating audio deletes the server-side text transcript to ensure there isn't text in the context that the user doesn't know about.
 - The server responds with a [`conversation.item.truncated`](../realtime-audio-reference.md#realtimeservereventconversationitemtruncated) event.
 
+
+
 ## Related content
 
 * Try the [real-time audio quickstart](../realtime-audio-quickstart.md)
```
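The truncation bullets in the hunk above describe the server's reply to a client truncate request. As a rough sketch of what the client side sends over the WebSocket: the event type `conversation.item.truncate` and its field names below are assumptions based on the Realtime API's client-event conventions, not taken from this commit, so verify them against the realtime audio reference.

```python
import json

def build_truncate_event(item_id: str, audio_end_ms: int) -> str:
    """Serialize a conversation.item.truncate client event as JSON.

    Sending this over the Realtime API WebSocket asks the server to cut an
    assistant audio item off at audio_end_ms, so the stored context matches
    what the user actually heard before interrupting.
    """
    event = {
        "type": "conversation.item.truncate",
        "item_id": item_id,            # ID of the assistant audio item
        "content_index": 0,            # index of the audio content part
        "audio_end_ms": audio_end_ms,  # milliseconds of audio played
    }
    return json.dumps(event)

# Example: the user interrupted after hearing 1.5 seconds of audio.
payload = build_truncate_event("item_abc123", 1500)
```

The server would then answer with the `conversation.item.truncated` event noted in the diff.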
