articles/ai-services/speech-service/includes/quickstarts/voice-live-agents/intro.md (1 addition, 1 deletion)
@@ -16,4 +16,4 @@

 This separation also supports better maintainability and scalability for scenarios where multiple conversational experiences or business logic variations are needed.

-To instead use the voice live API without agents, see the [voice live quickstart](/azure/ai-services/speech-service/voice-live-quickstart).
+To instead use the Voice live API without agents, see the [Voice live API quickstart](/azure/ai-services/speech-service/voice-live-quickstart).
articles/ai-services/speech-service/includes/quickstarts/voice-live-api/intro.md (4 additions, 4 deletions)
@@ -1,9 +1,9 @@
 ---
-author: PatrickFarley
+author: goergenj
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 7/31/2025
-ms.author: pafarley
+ms.date: 9/26/2025
+ms.author: jagoerge
 ---

 You create and run an application to use voice live directly with generative AI models for real-time voice agents.
@@ -16,4 +16,4 @@

 Direct model use is suitable for scenarios where agent-level abstraction or built-in logic is unnecessary.

-To instead use the voice live API with agents, see the [voice live agents quickstart](/azure/ai-services/speech-service/voice-live-agents-quickstart).
+To instead use the Voice live API with agents, see the [Voice live API agents quickstart](/azure/ai-services/speech-service/voice-live-agents-quickstart).
articles/ai-services/speech-service/regions.md (3 additions, 3 deletions)
@@ -6,7 +6,7 @@ author: goergenj
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: conceptual
-ms.date: 9/16/2025
+ms.date: 9/26/2025
 ms.author: jagoerge
 ms.custom: references_regions
 #Customer intent: As a developer, I want to learn about the available regions and endpoints for the Speech service.
@@ -187,9 +187,9 @@
 |uksouth| - | - | Global standard | Global standard | Global standard | Global standard | - | - | - | - | - | - |
 |westeurope| - | - | Data zone standard | Data zone standard | Data zone standard | Data zone standard | - | - | - | - | - | - |

-<sup>1</sup> The Azure AI Foundry resource must be in Central India. Azure AI Speech features remain in Central India. The voice live API uses Sweden Central as needed for generative AI load balancing.
+<sup>1</sup> The Azure AI Foundry resource must be in Central India. Azure AI Speech features remain in Central India. The Voice live API uses Sweden Central as needed for generative AI load balancing.

-<sup>2</sup> The Azure AI Foundry resource must be in West US 2. Azure AI Speech features remain in West US 2. The voice live API uses East US 2 as needed for generative AI load balancing.
+<sup>2</sup> The Azure AI Foundry resource must be in West US 2. Azure AI Speech features remain in West US 2. The Voice live API uses East US 2 as needed for generative AI load balancing.

-# Customer intent: As a developer, I want to learn how to use custom models with the voice live API for real-time voice agents.
+# Customer intent: As a developer, I want to learn how to use custom models with the Voice live API for real-time voice agents.
 ---

 # How to customize voice live input and output
@@ -69,7 +69,7 @@
 ```

 > [!NOTE]
-> In order to use a custom speech model with voice live API, the model must be available on the same Azure AI Foundry resource you are using to call the voice live API. If you trained the model on a different Azure AI Foundry or Azure AI Speech resource you have to copy the model to the resource you are using to call the voice live API.
+> In order to use a custom speech model with Voice live API, the model must be available on the same Azure AI Foundry resource you are using to call the Voice live API. If you trained the model on a different Azure AI Foundry or Azure AI Speech resource you have to copy the model to the resource you are using to call the Voice live API.
 > You pay separately for custom speech training and model hosting.

 ## Speech output customization
@@ -105,21 +105,21 @@
 ```

 > [!NOTE]
-> In order to use a custom voice model with voice live API, the model must be available on the same Azure AI Foundry resource you are using to call the voice live API. If you trained the model on a different Azure AI Foundry or Azure AI Speech resource you have to copy the model to the resource you are using to call the voice live API.
+> In order to use a custom voice model with Voice live API, the model must be available on the same Azure AI Foundry resource you are using to call the Voice live API. If you trained the model on a different Azure AI Foundry or Azure AI Speech resource you have to copy the model to the resource you are using to call the Voice live API.
 > You pay separately for custom voice training and model hosting.

 ### Azure custom avatar

 [Text to speech avatar](./text-to-speech-avatar/what-is-text-to-speech-avatar.md) converts text into a digital video of a photorealistic human (either a standard avatar or a [custom text to speech avatar](./text-to-speech-avatar/what-is-custom-text-to-speech-avatar.md)) speaking with a natural-sounding voice.

-The configuration for a custom avatar does not differ from the configuration of a standard avatar. Please refer to [How to use the voice live API - Azure text to speech avatar](./voice-live-how-to.md#azure-text-to-speech-avatar) for a detailed example.
+The configuration for a custom avatar does not differ from the configuration of a standard avatar. Please refer to [How to use the Voice live API - Azure text to speech avatar](./voice-live-how-to.md#azure-text-to-speech-avatar) for a detailed example.

 > [!NOTE]
-> In order to use a custom voice model with voice live API, the model must be available on the same Azure AI Foundry resource you are using to call the voice live API. If you trained the model on a different Azure AI Foundry or Azure AI Speech resource you have to copy the model to the resource you are using to call the voice live API.
+> In order to use a custom voice model with Voice live API, the model must be available on the same Azure AI Foundry resource you are using to call the Voice live API. If you trained the model on a different Azure AI Foundry or Azure AI Speech resource you have to copy the model to the resource you are using to call the Voice live API.
 > You pay separately for custom avatar training and model hosting.


 ## Related content

 - Try out the [Voice live API quickstart](./voice-live-quickstart.md)
-- Learn more about [How to use the voice live API](./voice-live-how-to.md)
+- Learn more about [How to use the Voice live API](./voice-live-how-to.md)

-The voice live API provides a capable WebSocket interface compared to the [Azure OpenAI Realtime API](../../ai-foundry/openai/how-to/realtime-audio.md).
+The Voice live API provides a capable WebSocket interface compared to the [Azure OpenAI Realtime API](../../ai-foundry/openai/how-to/realtime-audio.md).

-Unless otherwise noted, the voice live API uses the [same events](/azure/ai-foundry/openai/realtime-audio-reference?context=/azure/ai-services/speech-service/context/context) as the Azure OpenAI Realtime API. This document provides a reference for the event message properties that are specific to the voice live API.
+Unless otherwise noted, the Voice live API uses the [same events](/azure/ai-foundry/openai/realtime-audio-reference?context=/azure/ai-services/speech-service/context/context) as the Azure OpenAI Realtime API. This document provides a reference for the event message properties that are specific to the Voice live API.

 ## Supported models and regions

-For a table of supported models and regions, see the [voice live API overview](./voice-live.md#supported-models-and-regions).
+For a table of supported models and regions, see the [Voice live API overview](./voice-live.md#supported-models-and-regions).

 ## Authentication

-An [Azure AI Foundry resource](../multi-service-resource.md) is required to access the voice live API.
+An [Azure AI Foundry resource](../multi-service-resource.md) is required to access the Voice live API.

 ### WebSocket endpoint

-The WebSocket endpoint for the voice live API is `wss://<your-ai-foundry-resource-name>.services.ai.azure.com/voice-live/realtime?api-version=2025-10-01` or, for older resources, `wss://<your-ai-foundry-resource-name>.cognitiveservices.azure.com/voice-live/realtime?api-version=2025-10-01`.
+The WebSocket endpoint for the Voice live API is `wss://<your-ai-foundry-resource-name>.services.ai.azure.com/voice-live/realtime?api-version=2025-10-01` or, for older resources, `wss://<your-ai-foundry-resource-name>.cognitiveservices.azure.com/voice-live/realtime?api-version=2025-10-01`.
 The endpoint is the same for all models. The only difference is the required `model` query parameter, or, when using the Agent service, the `agent_id` and `project_id` parameters.

 For example, an endpoint for a resource with a custom domain would be `wss://<your-ai-foundry-resource-name>.services.ai.azure.com/voice-live/realtime?api-version=2025-10-01&model=gpt-realtime`

 ### Credentials

-The voice live API supports two authentication methods:
+The Voice live API supports two authentication methods:

 - **Microsoft Entra** (recommended): Use token-based authentication for an Azure AI Foundry resource. Apply a retrieved authentication token using a `Bearer` token with the `Authorization` header.
 - **API key**: An `api-key` can be provided in one of two ways:
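As a quick illustration of the endpoint and credential rules described above, here's a minimal Python sketch (not taken from the changed files). The resource name, model, and API key are placeholders, and it assumes the `websockets` and `azure-identity` packages:

```python
# Sketch: connect to the voice live WebSocket endpoint with either auth method.
# Placeholders: resource name, model, API key. Assumes `websockets` and `azure-identity`.
import asyncio
import websockets
from azure.identity import DefaultAzureCredential

RESOURCE = "<your-ai-foundry-resource-name>"  # placeholder
URL = (
    f"wss://{RESOURCE}.services.ai.azure.com/voice-live/realtime"
    "?api-version=2025-10-01&model=gpt-realtime"
)

def entra_headers() -> dict:
    # Recommended: Microsoft Entra bearer token in the Authorization header.
    token = DefaultAzureCredential().get_token("https://cognitiveservices.azure.com/.default")
    return {"Authorization": f"Bearer {token.token}"}

def api_key_headers(key: str) -> dict:
    # Alternative: pass the resource key in the api-key header.
    return {"api-key": key}

async def main() -> None:
    # On websockets < 14, the keyword argument is `extra_headers` instead of `additional_headers`.
    async with websockets.connect(URL, additional_headers=entra_headers()) as ws:
        print("Connected to", URL)

asyncio.run(main())
```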
@@ -52,7 +52,7 @@ For the recommended keyless authentication with Microsoft Entra ID, you need to:

 ## Session configuration

-Often, the first event sent by the caller on a newly established voice live API session is the [`session.update`](../openai/realtime-audio-reference.md?context=/azure/ai-services/speech-service/context/context#realtimeclienteventsessionupdate) event. This event controls a wide set of input and output behavior, with output and response generation properties then later overridable using the [`response.create`](../openai/realtime-audio-reference.md?context=/azure/ai-services/speech-service/context/context#realtimeclienteventresponsecreate) event.
+Often, the first event sent by the caller on a newly established Voice live API session is the [`session.update`](../openai/realtime-audio-reference.md?context=/azure/ai-services/speech-service/context/context#realtimeclienteventsessionupdate) event. This event controls a wide set of input and output behavior, with output and response generation properties then later overridable using the [`response.create`](../openai/realtime-audio-reference.md?context=/azure/ai-services/speech-service/context/context#realtimeclienteventresponsecreate) event.

 Here's an example `session.update` message that configures several aspects of the session, including turn detection, input audio processing, and voice output. Most session parameters are optional and can be omitted if not needed.

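The example message itself falls outside this hunk. As a stand-in, here's a hedged Python sketch of that exchange: it sends a minimal `session.update` over an open connection `ws` (for example, one opened as in the previous sketch) and waits for the server's acknowledgment. The instruction text and turn-detection settings are illustrative assumptions, not values from the changed file:

```python
# Sketch: send a minimal session.update and wait for session.updated (or error).
# `ws` is an open voice live WebSocket connection; payload values are illustrative.
import json

async def configure_session(ws) -> dict | None:
    session_update = {
        "type": "session.update",
        "session": {
            "instructions": "You are a helpful voice assistant.",  # placeholder prompt
            "turn_detection": {"type": "server_vad"},  # see the turn detection table below
        },
    }
    await ws.send(json.dumps(session_update))

    # The server acknowledges the new configuration with a session.updated event.
    async for message in ws:
        event = json.loads(message)
        if event.get("type") in ("session.updated", "error"):
            return event
    return None
```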
@@ -91,7 +91,7 @@
 The following sections describe the properties of the `session` object that can be configured in the `session.update` message.

 > [!TIP]
-> For comprehensive descriptions of supported events and properties, see the [Azure OpenAI Realtime API events reference documentation](../openai/realtime-audio-reference.md?context=/azure/ai-services/speech-service/context/context). This document provides a reference for the event message properties that are enhancements via the voice live API.
+> For comprehensive descriptions of supported events and properties, see the [Azure OpenAI Realtime API events reference documentation](../openai/realtime-audio-reference.md?context=/azure/ai-services/speech-service/context/context). This document provides a reference for the event message properties that are enhancements via the Voice live API.

 ### Input audio properties

@@ -125,11 +125,11 @@

 ## Conversational enhancements

-The voice live API offers conversational enhancements to provide robustness to the natural end-user conversation flow.
+The Voice live API offers conversational enhancements to provide robustness to the natural end-user conversation flow.

 ### Turn Detection Parameters

-Turn detection is the process of detecting when the end-user started or stopped speaking. The voice live API builds on the Azure OpenAI Realtime API `turn_detection` property to configure turn detection. The `azure_semantic_vad` type and the advanced `end_of_utterance_detection` are key differentiators between the voice live API and the Azure OpenAI Realtime API.
+Turn detection is the process of detecting when the end-user started or stopped speaking. The Voice live API builds on the Azure OpenAI Realtime API `turn_detection` property to configure turn detection. The `azure_semantic_vad` type and the advanced `end_of_utterance_detection` are key differentiators between the Voice live API and the Azure OpenAI Realtime API.

 | Property | Type | Required or optional | Description |
 |----------|----------|----------|------------|
@@ -139,7 +139,7 @@
 |`speech_duration_ms`| integer | Optional | The duration of user's speech audio required to start detection. If not set or under 80 ms, the detector uses a default value of 80 ms. |
 |`silence_duration_ms`| integer | Optional | The duration of user's silence, measured in milliseconds, to detect the end of speech. |
 |`remove_filler_words`| boolean | Optional | Determines whether to remove filler words to reduce the false alarm rate. This property must be set to `true` when using `azure_semantic_vad`.<br/><br/>The default value is `false`. |
-| `end_of_utterance_detection` | object | Optional | Configuration for end of utterance detection. The voice live API offers advanced end-of-turn detection to indicate when the end-user stopped speaking while allowing for natural pauses. End of utterance detection can significantly reduce premature end-of-turn signals without adding user-perceivable latency. End of utterance detection can be used with either VAD selection.<br/><br/>Properties of `end_of_utterance_detection` include:<br/>- `model`: The model to use for end of utterance detection. The supported values are:<br/> `semantic_detection_v1` supporting English.<br/> `semantic_detection_v1_multilingual` supporting English, Spanish, French, Italian, German (DE), Japanese, Portuguese, Chinese, Korean, Hindi.<br/>Other languages will be bypassed.<br/>- `threshold`: Threshold to determine the end of utterance (0.0 to 1.0). The default value is 0.01.<br/>- `timeout`: Timeout in seconds. The default value is 2 seconds.<br/><br/>End of utterance detection currently doesn't support gpt-realtime, gpt-4o-mini-realtime, and phi4-mm-realtime.|
+| `end_of_utterance_detection` | object | Optional | Configuration for end of utterance detection. The Voice live API offers advanced end-of-turn detection to indicate when the end-user stopped speaking while allowing for natural pauses. End of utterance detection can significantly reduce premature end-of-turn signals without adding user-perceivable latency. End of utterance detection can be used with either VAD selection.<br/><br/>Properties of `end_of_utterance_detection` include:<br/>- `model`: The model to use for end of utterance detection. The supported values are:<br/> `semantic_detection_v1` supporting English.<br/> `semantic_detection_v1_multilingual` supporting English, Spanish, French, Italian, German (DE), Japanese, Portuguese, Chinese, Korean, Hindi.<br/>Other languages will be bypassed.<br/>- `threshold`: Threshold to determine the end of utterance (0.0 to 1.0). The default value is 0.01.<br/>- `timeout`: Timeout in seconds. The default value is 2 seconds.<br/><br/>End of utterance detection currently doesn't support gpt-realtime, gpt-4o-mini-realtime, and phi4-mm-realtime.|

 Here's an example of end of utterance detection in a session object:
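That example isn't captured in this hunk either. Drawing only on the parameter table above, the sketch below shows how `azure_semantic_vad` and `end_of_utterance_detection` might be combined in a session object; the duration value is illustrative, not a recommendation:

```python
# Sketch: turn_detection settings assembled from the parameters documented above.
# Duration values are illustrative; table defaults are noted in comments.
turn_detection = {
    "type": "azure_semantic_vad",
    "speech_duration_ms": 80,      # minimum speech before detection starts (default 80 ms)
    "silence_duration_ms": 500,    # silence that marks the end of speech (illustrative)
    "remove_filler_words": True,   # must be true with azure_semantic_vad
    "end_of_utterance_detection": {
        "model": "semantic_detection_v1_multilingual",
        "threshold": 0.01,         # default
        "timeout": 2,              # seconds (default)
    },
}

session_update = {"type": "session.update", "session": {"turn_detection": turn_detection}}
```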
articles/ai-services/speech-service/voice-live-language-support.md (6 additions, 6 deletions)
@@ -1,15 +1,15 @@
 ---
 title: Voice live API language support
 titleSuffix: Azure AI services
-description: Learn about the languages supported by voice live API and how to configure them.
+description: Learn about the languages supported by Voice live API and how to configure them.
 manager: nitinme
 author: goergenj
 ms.author: jagoerge
 ms.service: azure-ai-speech
 ms.topic: conceptual
-ms.date: 8/11/2025
+ms.date: 9/26/2025
 ms.custom: languages
-# Customer intent: As a developer, I want to learn about which languages are supported by the voice live API and how to configure them.
+# Customer intent: As a developer, I want to learn about which languages are supported by the Voice live API and how to configure them.
 ---

 # Voice live API supported languages (Preview)
@@ -18,7 +18,7 @@ ms.custom: languages

 ## Introduction

-The voice live API supports multiple languages and configuration options. In this document, you learn which languages the voice live API supports and how to configure them.
+The Voice live API supports multiple languages and configuration options. In this document, you learn which languages the Voice live API supports and how to configure them.

 ## [Speech input](#tab/speechinput)

@@ -207,6 +207,6 @@

 ## Related content

-- Learn more about [How to use the voice live API](./voice-live-how-to.md)
-- Try out the [voice live API quickstart](./voice-live-quickstart.md)
+- Learn more about [How to use the Voice live API](./voice-live-how-to.md)
+- Try out the [Voice live API quickstart](./voice-live-quickstart.md)
 - See the [Voice live API reference](./voicelive-api-reference.md)