articles/ai-services/openai/concepts/models.md (1 addition, 1 deletion)

@@ -229,7 +229,7 @@ The `gpt-4o-realtime-preview` model is part of the GPT-4o model family and suppo
GPT-4o audio is available in the East US 2 (`eastus2`) and Sweden Central (`swedencentral`) regions. To use GPT-4o audio, you need to [create](../how-to/create-resource.md) or use an existing resource in one of the supported regions.
-When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o audio model. If you are performing a programmatic deployment, the **model** name is `gpt-4o-realtime-preview`. For more information on how to use GPT-4o audio, see the [GPT-4o audio documentation](../how-to/audio-real-time.md).
+When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o audio model. If you are performing a programmatic deployment, the **model** name is `gpt-4o-realtime-preview`. For more information on how to use GPT-4o audio, see the [GPT-4o audio documentation](../realtime-audio-quickstart.md).
Details about maximum request tokens and training data are available in the following table.
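
As context for the "programmatic deployment" that the changed line mentions, here is a minimal sketch of creating a `gpt-4o-realtime-preview` deployment with the Azure management SDK for Python. It assumes the `azure-identity` and `azure-mgmt-cognitiveservices` packages; the subscription, resource group, and account names are placeholders, and the model version (`2024-10-01`) and SKU (`GlobalStandard`) are illustrative assumptions rather than values confirmed by this PR.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient
from azure.mgmt.cognitiveservices.models import (
    Deployment,
    DeploymentModel,
    DeploymentProperties,
    Sku,
)

# Placeholder identifiers -- replace with your own values.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
account_name = "<azure-openai-resource>"  # resource in East US 2 or Sweden Central

client = CognitiveServicesManagementClient(DefaultAzureCredential(), subscription_id)

# The model name is gpt-4o-realtime-preview; the version and SKU below are assumptions.
poller = client.deployments.begin_create_or_update(
    resource_group_name=resource_group,
    account_name=account_name,
    deployment_name="gpt-4o-realtime-preview",
    deployment=Deployment(
        properties=DeploymentProperties(
            model=DeploymentModel(
                format="OpenAI",
                name="gpt-4o-realtime-preview",
                version="2024-10-01",  # assumed model version
            )
        ),
        sku=Sku(name="GlobalStandard", capacity=1),
    ),
)
deployment = poller.result()
print(deployment.name, deployment.properties.provisioning_state)
```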

articles/ai-services/openai/realtime-audio-quickstart.md (7 additions, 7 deletions)

@@ -23,7 +23,7 @@ Most users of the Realtime API need to deliver and receive audio from an end-use
Currently only `gpt-4o-realtime-preview` version: `2024-10-01-preview` supports real-time audio.
-The `gpt-4o-realtime-preview` model is available for global deployments in [East US 2 and Sweden Central regions](../concepts/models.md#global-standard-model-availability).
+The `gpt-4o-realtime-preview` model is available for global deployments in [East US 2 and Sweden Central regions](./concepts/models.md#global-standard-model-availability).
> [!IMPORTANT]
> The system stores your prompts and completions as described in the "Data Use and Access for Abuse Monitoring" section of the service-specific Product Terms for Azure OpenAI Service, except that the Limited Exception does not apply. Abuse monitoring will be turned on for use of the `gpt-4o-realtime-preview` API even for customers who otherwise are approved for modified abuse monitoring.
@@ -38,13 +38,13 @@ Support for the Realtime API was first added in API version `2024-10-01-preview`
## Prerequisites
- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
-- An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
+- An Azure OpenAI resource created in a [supported region](#supported-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](./how-to/create-resource.md).
## Deploy a model for real-time audio
Before you can use GPT-4o real-time audio, you need a deployment of the `gpt-4o-realtime-preview` model in a supported region as described in the [supported models](#supported-models) section.
-You can deploy the model from the [Azure AI Studio model catalog](../../../ai-studio/how-to/model-catalog-overview.md) or from your project in AI Studio. Follow these steps to deploy a `gpt-4o-realtime-preview` model from the model catalog:
+You can deploy the model from the [Azure AI Studio model catalog](../../ai-studio/how-to/model-catalog-overview.md) or from your project in AI Studio. Follow these steps to deploy a `gpt-4o-realtime-preview` model from the model catalog:
1. Sign in to [AI Studio](https://ai.azure.com) and go to the **Home** page.
1. Select **Model catalog** from the left sidebar.
@@ -71,13 +71,13 @@ To chat with your deployed `gpt-4o-realtime-preview` model in the [Azure AI Stud
1. Select your deployed `gpt-4o-realtime-preview` model from the **Deployment** dropdown.
1. Select **Enable microphone** to allow the browser to access your microphone. If you already granted permission, you can skip this step.
-:::image type="content" source="../media/how-to/real-time/real-time-playground.png" alt-text="Screenshot of the real-time audio playground with the deployed model selected." lightbox="../media/how-to/real-time/real-time-playground.png":::
+:::image type="content" source="./media/how-to/real-time/real-time-playground.png" alt-text="Screenshot of the real-time audio playground with the deployed model selected." lightbox="./media/how-to/real-time/real-time-playground.png":::
1. Optionally you can edit contents in the **Give the model instructions and context** text box. Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, tell it what it should and shouldn't answer, and tell it how to format responses.
1. Optionally, change settings such as threshold, prefix padding, and silence duration.
1. Select **Start listening** to start the session. You can speak into the microphone to start a chat.
-:::image type="content" source="../media/how-to/real-time/real-time-playground-start-listening.png" alt-text="Screenshot of the real-time audio playground with the start listening button and microphone access enabled." lightbox="../media/how-to/real-time/real-time-playground-start-listening.png":::
+:::image type="content" source="./media/how-to/real-time/real-time-playground-start-listening.png" alt-text="Screenshot of the real-time audio playground with the start listening button and microphone access enabled." lightbox="./media/how-to/real-time/real-time-playground-start-listening.png":::
1. You can interrupt the chat at any time by speaking. You can end the chat by selecting the **Stop listening** button.
@@ -129,5 +129,5 @@ You can run the sample code locally on your machine by following these steps. Re
## Related content
-* Learn more about Azure OpenAI [deployment types](./deployment-types.md)
-* Learn more about Azure OpenAI [quotas and limits](../quotas-limits.md)
+* Learn more about Azure OpenAI [deployment types](./how-to/deployment-types.md)
+* Learn more about Azure OpenAI [quotas and limits](quotas-limits.md)
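
The quickstart's local sample step and the playground settings (threshold, prefix padding, and silence duration) map onto the Realtime API's session configuration. Here is a minimal sketch in Python, assuming the `websockets` package and the `2024-10-01-preview` endpoint shape; the resource name, API key, and the specific turn-detection values are placeholders and assumptions, not values taken from this PR.

```python
import asyncio
import json

import websockets  # pip install websockets

# Placeholders -- replace with your resource name, deployment, and key.
RESOURCE = "<your-resource-name>"
DEPLOYMENT = "gpt-4o-realtime-preview"
API_KEY = "<your-api-key>"

URL = (
    f"wss://{RESOURCE}.openai.azure.com/openai/realtime"
    f"?api-version=2024-10-01-preview&deployment={DEPLOYMENT}"
)


async def main() -> None:
    # Older websockets releases use extra_headers; newer ones call it additional_headers.
    async with websockets.connect(URL, extra_headers={"api-key": API_KEY}) as ws:
        # Configure the session; turn_detection mirrors the playground's threshold,
        # prefix padding, and silence duration settings.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "instructions": "You are a helpful assistant.",
                "turn_detection": {
                    "type": "server_vad",
                    "threshold": 0.5,
                    "prefix_padding_ms": 300,
                    "silence_duration_ms": 500,
                },
            },
        }))
        # Print the types of the first few server events (for example, session.created).
        for _ in range(3):
            event = json.loads(await ws.recv())
            print(event["type"])


asyncio.run(main())
```

A real client would additionally stream microphone audio to the service (for example, as `input_audio_buffer.append` events with base64-encoded audio) and play back the audio it receives; the quickstart's sample code covers that end to end.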

articles/ai-services/openai/whats-new.md (1 addition, 1 deletion)

@@ -44,7 +44,7 @@ Azure OpenAI GPT-4o audio is part of the GPT-4o model family that supports low-l
The `gpt-4o-realtime-preview` model is available for global deployments in [East US 2 and Sweden Central regions](./concepts/models.md#global-standard-model-availability).
-For more information, see the [GPT-4o real-time audio documentation](./how-to/audio-real-time.md).
+For more information, see the [GPT-4o real-time audio documentation](realtime-audio-quickstart.md).
0 commit comments