# articles/ai-services/openai/concepts/audio.md (14 additions, 50 deletions)
[...]

> [!IMPORTANT]
> The content filtering system isn't applied to prompts and completions processed by audio models such as Whisper in Azure OpenAI Service. Learn more about the [Audio API in Azure OpenAI](models.md?tabs=standard-audio#standard-models-by-endpoint).

Audio models in Azure OpenAI are available via the `realtime`, `completions`, and `audio` APIs. They're designed to handle a variety of tasks, including speech recognition, translation, and text to speech.

For information about the available audio models per region in Azure OpenAI Service, see the [standard models by endpoint](models.md?tabs=standard-audio#standard-models-by-endpoint) and [global standard model availability](models.md?tabs=standard-audio#global-standard-model-availability) documentation.

## GPT-4o audio Realtime API
GPT-4o real-time audio is designed to handle real-time, low-latency conversational interactions, making it a great fit for support agents, assistants, translators, and other use cases that need highly responsive back-and-forth with a user. For more information on how to use GPT-4o real-time audio, see the [GPT-4o real-time audio quickstart](../realtime-audio-quickstart.md) and [how to use GPT-4o audio](../how-to/realtime-audio.md).
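The back-and-forth with the Realtime API is driven by JSON events exchanged over a WebSocket connection. As a minimal sketch of the shape of that exchange (the `session.update` and `response.create` event types follow the public Realtime API reference, but treat the exact field names and values here as assumptions to verify for your API version):

```python
import json

def session_update(voice: str = "alloy") -> str:
    # Configure the session: which modalities to return and which
    # voice to use for synthesized speech.
    event = {
        "type": "session.update",
        "session": {
            "modalities": ["text", "audio"],
            "voice": voice,
        },
    }
    return json.dumps(event)

def response_create(prompt: str) -> str:
    # Ask the model to start generating a spoken (and text) response.
    event = {
        "type": "response.create",
        "response": {
            "modalities": ["text", "audio"],
            "instructions": prompt,
        },
    }
    return json.dumps(event)

# These JSON strings would be sent over a WebSocket connection to the
# /realtime endpoint of an Azure OpenAI resource; responses stream back
# as server events on the same connection.
```

The quickstart linked above covers authentication and the actual WebSocket session; the sketch only shows the event payloads a client constructs.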
## GPT-4o audio completions
GPT-4o audio completion is designed to generate audio from audio or text prompts, making it a great fit for generating audio books, audio content, and other use cases that require audio generation. The GPT-4o audio completions model introduces the audio modality into the existing `/chat/completions` API. For more information on how to use GPT-4o audio completions, see the [audio generation quickstart](../audio-completions-quickstart.md).
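Because the audio modality rides on the existing `/chat/completions` API, a request mostly looks like a normal chat completions call with audio switched on. A minimal sketch, assuming the `modalities` and `audio` request fields and a base64-encoded `audio.data` field on the response message (verify these against the API reference for your API version; the deployment name is a placeholder):

```python
import base64

def build_audio_request(prompt: str) -> dict:
    # Request both a text transcript and synthesized audio from an
    # audio-capable chat completions deployment.
    return {
        "model": "gpt-4o-mini-audio-preview",  # placeholder deployment name
        "modalities": ["text", "audio"],
        "audio": {"voice": "alloy", "format": "wav"},
        "messages": [{"role": "user", "content": prompt}],
    }

def save_audio(response: dict, path: str) -> None:
    # The synthesized audio comes back base64-encoded on the assistant
    # message; decode it before writing the bytes to disk.
    encoded = response["choices"][0]["message"]["audio"]["data"]
    with open(path, "wb") as f:
        f.write(base64.b64decode(encoded))
```

The request body is what you would POST to your deployment's chat completions endpoint; the audio generation quickstart linked above shows the full call.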
## Audio API
The audio models via the `/audio` API can be used for speech to text, translation, and text to speech. To get started with the audio API, see the [Whisper quickstart](../whisper-quickstart.md) for speech to text.
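The speech to text models accept audio files up to 25 MB per request (per the models table in the models documentation), so a local pre-flight check saves a rejected round trip. A small helper as a sketch; interpreting MB as binary megabytes is an assumption here:

```python
import os

# 25 MB per-file limit for the /audio speech to text models
# (interpreted here as binary megabytes, which is an assumption).
MAX_AUDIO_BYTES = 25 * 1024 * 1024

def exceeds_limit(size_bytes: int) -> bool:
    # Pure check so the limit logic is easy to test.
    return size_bytes > MAX_AUDIO_BYTES

def check_audio_file(path: str) -> None:
    # Fail fast locally instead of spending a rejected API request.
    size = os.path.getsize(path)
    if exceeds_limit(size):
        raise ValueError(
            f"{path} is {size} bytes; the /audio endpoints accept "
            f"at most {MAX_AUDIO_BYTES} bytes per file."
        )
```

Files over the limit need to be split or compressed before upload.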
> [!NOTE]
> To help you decide whether to use Azure AI Speech or Azure OpenAI Service, see the [Azure AI Speech batch transcription](../../speech-service/batch-transcription-create.md), [What is the Whisper model?](../../speech-service/whisper-overview.md), and [OpenAI text to speech voices](../../speech-service/openai-voices.md#openai-text-to-speech-voices-via-azure-openai-service-or-via-azure-ai-speech) guides.
# articles/ai-services/openai/concepts/models.md (3 additions, 11 deletions)
[...]

## Audio models

Audio models in Azure OpenAI are available via the `realtime`, `completions`, and `audio` APIs.

### GPT-4o audio models

The GPT-4o audio models are part of the GPT-4o model family and support either low-latency, "speech in, speech out" conversational interactions or audio generation.

> [!CAUTION]
> We don't recommend using preview models in production. We will upgrade all deployments of preview models to either future preview versions or to the latest stable GA version. Models that are designated preview don't follow the standard Azure OpenAI model lifecycle.

Details about maximum request tokens and training data are available in the following table.

| Model ID | Description | Max Request (tokens) | Training Data (up to) |
|---|---|---|---|
[...]

The audio models via the `/audio` API can be used for speech to text, translation, and text to speech.

#### Speech to text models

| Model ID | Description | Max Request (audio file size) |
|---|---|---|
|`gpt-4o-transcribe`| Speech to text powered by GPT-4o. | 25 MB|
|`gpt-4o-mini-transcribe`| Speech to text powered by GPT-4o mini. | 25 MB|

#### Speech translation models

| Model ID | Description | Max Request (audio file size) |
|---|---|---|

[...]

| Model ID | Description |
| --- | :--- |
|`tts`| Text to speech optimized for speed. |
|`tts-hd`| Text to speech optimized for quality.|
|`gpt-4o-mini-tts`| Text to speech model powered by GPT-4o mini.<br/><br/>You can guide the voice to speak in a style or tone. |
For more information, see [Audio models region availability](?tabs=standard-audio#standard-models-by-endpoint) in this article.
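The style and tone guidance that `gpt-4o-mini-tts` accepts is passed alongside the input text; in the OpenAI-compatible `/audio/speech` request this is commonly carried by an `instructions` field, which, along with the voice name below, you should treat as an assumption to verify against the API reference for your API version. A sketch of such a request body:

```python
def build_tts_request(text: str, style: str) -> dict:
    # Sketch of an /audio/speech request body for gpt-4o-mini-tts.
    # The `instructions` field carries the style/tone guidance; verify
    # the field name against the reference for your API version.
    return {
        "model": "gpt-4o-mini-tts",
        "voice": "alloy",  # assumed voice name
        "input": text,
        "instructions": style,
        "response_format": "mp3",
    }
```

The `tts` and `tts-hd` models take the same `input` and `voice` fields but have no style-guidance equivalent.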
# articles/ai-services/openai/whats-new.md (1 addition, 1 deletion)

[...]

The `gpt-4o-mini-audio-preview` (2024-12-17) model is the latest audio completions model.

The `gpt-4o-mini-realtime-preview` (2024-12-17) model is the latest real-time audio model. The real-time models use the same underlying GPT-4o audio model as the completions API, but are optimized for low-latency, real-time audio interactions. For more information, see the [real-time audio quickstart](./realtime-audio-quickstart.md).

For more information about available models, see the [models and versions documentation](./concepts/models.md#audio-models).