`articles/ai-foundry/model-inference/concepts/content-filter.md` (1 addition, 1 deletion)

```diff
@@ -14,7 +14,7 @@ manager: nitinme
 # Content filtering for model inference in Azure AI services
 
 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio API in Azure OpenAI](../../../ai-services/openai/concepts/models.md?tabs=audio#audio-models).
+> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio models in Azure OpenAI](../../../ai-services/openai/concepts/models.md?tabs=standard-audio#standard-models-by-endpoint).
 
 Azure AI model inference in Azure AI Services includes a content filtering system that works alongside core models and is powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety). This system works by running both the prompt and completion through an ensemble of classification models designed to detect and prevent the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.
```
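The ensemble's verdicts surface per response as annotations. A minimal sketch of reading them, assuming the documented `content_filter_results` shape; the payload below is a hand-written sample, not a live API response:

```python
# Sketch: reading content-filter annotations from an Azure OpenAI
# chat completions response. The payload is a hand-written sample in
# the documented shape, not a live API response.
sample_response = {
    "choices": [
        {
            "message": {"role": "assistant", "content": "Hello!"},
            "content_filter_results": {
                "hate": {"filtered": False, "severity": "safe"},
                "self_harm": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": True, "severity": "medium"},
            },
        }
    ]
}

def flagged_categories(response: dict) -> list[str]:
    """Return the names of the categories the filter acted on."""
    results = response["choices"][0].get("content_filter_results", {})
    return [name for name, r in results.items() if r.get("filtered")]

print(flagged_categories(sample_response))  # ['violence']
```

Checking these annotations, rather than only the message text, lets an application distinguish a filtered completion from an ordinary empty one.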
New file (added in full; the file path and the opening lines of the front matter are truncated in the capture):

```yaml
description: Learn about the audio capabilities of Azure OpenAI Service.
author: eric-urban
ms.author: eur
ms.service: azure-ai-openai
ms.topic: conceptual
ms.date: 4/15/2025
ms.custom: template-concept
manager: nitinme
---
```

# Audio capabilities in Azure OpenAI Service
> [!IMPORTANT]
> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service.

Audio models in Azure OpenAI are available via the `realtime`, `completions`, and `audio` APIs. The audio models are designed to handle a variety of tasks, including speech recognition, translation, and text to speech.

For information about the available audio models per region in Azure OpenAI Service, see the [standard models by endpoint](models.md?tabs=standard-audio#standard-models-by-endpoint) and [global standard model availability](models.md?tabs=standard-audio#global-standard-model-availability) documentation.
## GPT-4o audio Realtime API

GPT-4o real-time audio is designed to handle real-time, low-latency conversational interactions, making it a great fit for support agents, assistants, translators, and other use cases that need highly responsive back-and-forth with a user. For more information on how to use GPT-4o real-time audio, see the [GPT-4o real-time audio quickstart](../realtime-audio-quickstart.md) and [how to use GPT-4o audio](../how-to/realtime-audio.md).
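The realtime models are reached over a WebSocket rather than a plain HTTPS request. A minimal sketch of building the Azure connection URL, assuming the documented `wss://<resource>.openai.azure.com/openai/realtime` endpoint shape; the resource name, deployment name, and `api-version` here are placeholders to verify against the quickstart:

```python
# Sketch: constructing the WebSocket URL for the GPT-4o Realtime API
# on Azure OpenAI. Resource and deployment names are placeholders;
# confirm the current api-version against the realtime quickstart.
from urllib.parse import urlencode

def realtime_url(resource: str, deployment: str,
                 api_version: str = "2024-10-01-preview") -> str:
    query = urlencode({"api-version": api_version, "deployment": deployment})
    return f"wss://{resource}.openai.azure.com/openai/realtime?{query}"

url = realtime_url("my-resource", "gpt-4o-realtime-preview")
print(url)
# wss://my-resource.openai.azure.com/openai/realtime?api-version=2024-10-01-preview&deployment=gpt-4o-realtime-preview
```

Authentication (an `api-key` header or an Entra ID token) is then supplied when the WebSocket connection is opened.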
## GPT-4o audio completions

GPT-4o audio completions are designed to generate audio from audio or text prompts, making them a great fit for generating audio books, audio content, and other use cases that require audio generation. The GPT-4o audio completions model introduces the audio modality into the existing `/chat/completions` API. For more information on how to use GPT-4o audio completions, see the [audio generation quickstart](../audio-completions-quickstart.md).
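Because the audio modality rides on the existing chat completions request shape, only two extra fields are involved. A minimal sketch of the request body, plus the base64 decode step the returned audio needs; the deployment name and voice are placeholders, and with the `openai` SDK these fields would be keyword arguments to `client.chat.completions.create`:

```python
# Sketch: the shape of a /chat/completions request that asks a GPT-4o
# audio model for spoken output. "gpt-4o-audio-preview" and "alloy"
# are placeholder deployment/voice names.
import base64

request = {
    "model": "gpt-4o-audio-preview",           # your deployment name
    "modalities": ["text", "audio"],           # request both modalities
    "audio": {"voice": "alloy", "format": "wav"},
    "messages": [
        {"role": "user", "content": "Read this sentence aloud."}
    ],
}

# The response carries audio as base64 text; decoding yields WAV bytes.
# Simulated here with a stand-in payload rather than a live response:
fake_audio_b64 = base64.b64encode(b"RIFF....WAVE").decode()
wav_bytes = base64.b64decode(fake_audio_b64)
assert wav_bytes.startswith(b"RIFF")  # WAV files begin with a RIFF header
```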
## Audio API

The audio models via the `/audio` API can be used for speech to text, translation, and text to speech. To get started with the audio API, see the [Whisper quickstart](../whisper-quickstart.md) for speech to text.

> [!NOTE]
> To help you decide whether to use Azure AI Speech or Azure OpenAI Service, see the [Azure AI Speech batch transcription](../../speech-service/batch-transcription-create.md), [What is the Whisper model?](../../speech-service/whisper-overview.md), and [OpenAI text to speech voices](../../speech-service/openai-voices.md#openai-text-to-speech-voices-via-azure-openai-service-or-via-azure-ai-speech) guides.
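A minimal transcription sketch against the `/audio` endpoint, assuming the `openai` Python package and a Whisper deployment named `whisper`; the endpoint, key, and file path are placeholders, and the call only runs when credentials are present:

```python
# Sketch: speech to text via the /audio transcriptions endpoint with
# the `openai` SDK against Azure OpenAI. Endpoint, key, deployment
# name, and file path are placeholders.
import os

def transcribe(path: str) -> str:
    from openai import AzureOpenAI  # pip install openai
    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",
    )
    with open(path, "rb") as audio_file:
        result = client.audio.transcriptions.create(
            model="whisper",  # your Whisper deployment name
            file=audio_file,
        )
    return result.text

# Guarded so the sketch is importable without live credentials:
if os.environ.get("AZURE_OPENAI_API_KEY"):
    print(transcribe("speech.mp3"))
```

On Azure, `model` takes the deployment name rather than the raw OpenAI model ID, which is why `whisper` here is whatever name was chosen at deployment time.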
`articles/ai-services/openai/concepts/content-filter.md` (1 addition, 1 deletion)

```diff
@@ -14,7 +14,7 @@ manager: nitinme
 # Content filtering
 
 > [!IMPORTANT]
-> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio API in Azure OpenAI](models.md?tabs=audio#audio-models).
+> The content filtering system isn't applied to prompts and completions processed by the audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio models in Azure OpenAI](models.md?tabs=standard-audio#standard-models-by-endpoint).
 
 Azure OpenAI Service includes a content filtering system that works alongside core models, including DALL-E image generation models. This system works by running both the prompt and completion through an ensemble of classification models designed to detect and prevent the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Variations in API configurations and application design might affect completions and thus filtering behavior.
```
(The file header for the following hunks is missing in the capture.)

```diff
@@ -22,12 +22,11 @@ Azure OpenAI Service is powered by a diverse set of models with different capabi
 |[GPT-4.5 Preview](#gpt-45-preview)|The latest GPT model that excels at diverse text and image tasks. |
 |[o-series models](#o-series-models)|[Reasoning models](../how-to/reasoning.md) with advanced problem-solving and increased focus and capability. |
 |[GPT-4o & GPT-4o mini & GPT-4 Turbo](#gpt-4o-and-gpt-4-turbo)| The latest most capable Azure OpenAI models with multimodal versions, which can accept both text and images as input. |
-|[GPT-4o audio](#gpt-4o-audio)| GPT-4o audio models that support either low-latency, "speech in, speech out" conversational interactions or audio generation. |
 |[GPT-4](#gpt-4)| A set of models that improve on GPT-3.5 and can understand and generate natural language and code. |
 |[GPT-3.5](#gpt-35)| A set of models that improve on GPT-3 and can understand and generate natural language and code. |
 |[Embeddings](#embeddings-models)| A set of models that can convert text into numerical vector form to facilitate text similarity. |
 |[DALL-E](#dall-e-models)| A series of models that can generate original images from natural language. |
-|[Audio](#audio-models)| A series of models for speech to text, translation, and text to speech. |
+|[Audio](#audio-models)| A series of models for speech to text, translation, and text to speech. GPT-4o audio models support either low-latency, "speech in, speech out" conversational interactions or audio generation. |
 
 ## computer-use-preview
```
```diff
@@ -98,40 +97,6 @@ To learn more about the advanced `o-series` models see, [getting started with re
 |`o1-preview`| See the [models table](#model-summary-table-and-region-availability). This model is only available for customers who were granted access as part of the original limited access |
 |`o1-mini`| See the [models table](#model-summary-table-and-region-availability). |
 
-## GPT-4o audio
-
-The GPT 4o audio models are part of the GPT-4o model family and support either low-latency, "speech in, speech out" conversational interactions or audio generation.
-
-GPT-4o real-time audio is designed to handle real-time, low-latency conversational interactions, making it a great fit for support agents, assistants, translators, and other use cases that need highly responsive back-and-forth with a user. For more information on how to use GPT-4o real-time audio, see the [GPT-4o real-time audio quickstart](../realtime-audio-quickstart.md) and [how to use GPT-4o audio](../how-to/realtime-audio.md).
-
-GPT-4o audio completion is designed to generate audio from audio or text prompts, making it a great fit for generating audio books, audio content, and other use cases that require audio generation. The GPT-4o audio completions model introduces the audio modality into the existing `/chat/completions` API. For more information on how to use GPT-4o audio completions, see the [audio generation quickstart](../audio-completions-quickstart.md).
-
-> [!CAUTION]
-> We don't recommend using preview models in production. We will upgrade all deployments of preview models to either future preview versions or to the latest stable GA version. Models that are designated preview don't follow the standard Azure OpenAI model lifecycle.
-
-To use GPT-4o audio, you need [an Azure OpenAI resource](../how-to/create-resource.md) in one of the [supported regions](#global-standard-model-availability).
-
-When your resource is created, you can [deploy](../how-to/create-resource.md#deploy-a-model) the GPT-4o audio model.
-
-Details about maximum request tokens and training data are available in the following table.
-
-| Model ID | Description | Max Request (tokens) | Training Data (up to) |
-|---|---|---|
-|`gpt-4o-mini-audio-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for audio and text generation. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
-|`gpt-4o-mini-realtime-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
-|`gpt-4o-audio-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for audio and text generation. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
-|`gpt-4o-realtime-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
-|`gpt-4o-realtime-preview` (2024-10-01) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
-
-### Region availability
-
-| Model | Region |
-|---|---|
-|`gpt-4o-mini-audio-preview`| East US2 (Global Standard) |
-|`gpt-4o-mini-realtime-preview`| East US2 (Global Standard) <br> Sweden Central (Global Standard) |
-|`gpt-4o-audio-preview`| East US2 (Global Standard) <br> Sweden Central (Global Standard) |
-|`gpt-4o-realtime-preview`| East US2 (Global Standard) <br> Sweden Central (Global Standard) |
-
-To compare the availability of GPT-4o audio models across all regions, see the [models table](#global-standard-model-availability).
-
 ## GPT-4o and GPT-4 Turbo
 
 GPT-4o integrates text and images in a single model, enabling it to handle multiple data types simultaneously. This multimodal approach enhances accuracy and responsiveness in human-computer interactions. GPT-4o matches GPT-4 Turbo in English text and coding tasks while offering superior performance in non-English languages and vision tasks, setting new benchmarks for AI capabilities.
```
```diff
@@ -235,11 +200,56 @@ OpenAI's MTEB benchmark testing found that even when the third generation model'
 
 The DALL-E models generate images from text prompts that the user provides. DALL-E 3 is generally available for use with the REST APIs. DALL-E 2 and DALL-E 3 with client SDKs are in preview.
 
-## Audio API models
+## Audio models
+
+Audio models in Azure OpenAI are available via the `realtime`, `completions`, and `audio` APIs.
+
+### GPT-4o audio models
+
+The GPT-4o audio models are part of the GPT-4o model family and support either low-latency, "speech in, speech out" conversational interactions or audio generation.
+
+> [!CAUTION]
+> We don't recommend using preview models in production. We will upgrade all deployments of preview models to either future preview versions or to the latest stable GA version. Models that are designated preview don't follow the standard Azure OpenAI model lifecycle.
+
+Details about maximum request tokens and training data are available in the following table.
+
+| Model ID | Description | Max Request (tokens) | Training Data (up to) |
+|---|---|---|---|
+|`gpt-4o-mini-audio-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for audio and text generation. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
+|`gpt-4o-mini-realtime-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
+|`gpt-4o-audio-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for audio and text generation. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
+|`gpt-4o-realtime-preview` (2024-12-17) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
+|`gpt-4o-realtime-preview` (2024-10-01) <br> **GPT-4o audio**|**Audio model** for real-time audio processing. |Input: 128,000 <br> Output: 4,096 | Oct 2023 |
+
+To compare the availability of GPT-4o audio models across all regions, see the [models table](#global-standard-model-availability).
+
+### Audio API
 
 The audio models via the `/audio` API can be used for speech to text, translation, and text to speech.
 
-For more information see [Audio models](#audio-models) in this article.
```
The remainder of this hunk is truncated in the capture; the recoverable added content follows.

#### Speech to text models

| Model ID | Description | Max Request (audio file size) |
|---|---|---|
|`gpt-4o-transcribe`| Speech to text powered by GPT-4o. | 25 MB|
|`gpt-4o-mini-transcribe`| Speech to text powered by GPT-4o mini. | 25 MB|

You can also use the Whisper model via Azure AI Speech [batch transcription](../../speech-service/batch-transcription-create.md) API. Check out [What is the Whisper model?](../../speech-service/whisper-overview.md) to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.

### Speech translation models

| Model ID | Description | Max Request (audio file size) |

…

|`gpt-4o-mini-tts`| Text to speech model powered by GPT-4o mini. |

You can also use the OpenAI text to speech voices via Azure AI Speech. To learn more, see the [OpenAI text to speech voices via Azure OpenAI Service or via Azure AI Speech](../../speech-service/openai-voices.md#openai-text-to-speech-voices-via-azure-openai-service-or-via-azure-ai-speech) guide.
`articles/ai-services/openai/includes/text-to-speech-dotnet.md` (1 addition, 1 deletion)

```diff
@@ -12,7 +12,7 @@ recommendations: false
 ## Prerequisites
 
 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
-- An Azure OpenAI resource with a text to speech model (such as `tts`) deployed in a [supported region](../concepts/models.md?tabs=audio#audio-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
+- An Azure OpenAI resource with a text to speech model (such as `tts`) deployed in a [supported region](../concepts/models.md?tabs=standard-audio#standard-models-by-endpoint). For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
```
`articles/ai-services/openai/includes/whisper-dotnet.md` (1 addition, 1 deletion)

```diff
@@ -10,7 +10,7 @@ ms.date: 3/11/2025
 ## Prerequisites
 
 - An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true).
-- An Azure OpenAI resource with a speech to text model deployed in a [supported region](../concepts/models.md?tabs=audio#audio-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
+- An Azure OpenAI resource with a speech to text model deployed in a [supported region](../concepts/models.md?tabs=standard-audio#standard-models-by-endpoint). For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
```
`articles/ai-services/openai/includes/whisper-javascript.md` (1 addition, 1 deletion)

```diff
@@ -16,7 +16,7 @@ author: eric-urban
 - An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true)
 - [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)
 - [Azure CLI](/cli/azure/install-azure-cli) used for passwordless authentication in a local development environment; create the necessary context by signing in with the Azure CLI.
-- An Azure OpenAI resource with a speech to text model deployed in a [supported region](../concepts/models.md?tabs=audio#audio-models). For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
+- An Azure OpenAI resource with a speech to text model deployed in a [supported region](../concepts/models.md?tabs=standard-audio#standard-models-by-endpoint). For more information, see [Create a resource and deploy a model with Azure OpenAI](../how-to/create-resource.md).
```