The Voice Live API provides a capable WebSocket interface that's comparable to the [Azure OpenAI Realtime API](../openai/how-to/realtime-audio.md).
Unless otherwise noted, the Voice Live API uses the same events as the [Azure OpenAI Realtime API](/azure/ai-services/openai/realtime-audio-reference?context=/azure/ai-services/speech-service/context/context). This document provides a reference for the event message properties that are specific to the Voice Live API.
For a table of supported models and regions, see the Voice Live API overview.
## Authentication
An [Azure AI Foundry resource](../multi-service-resource.md) is required to access the Voice Live API.
### WebSocket endpoint
Here's an example `session.update` message that configures several aspects of the session.
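A minimal sketch of such a message, assembled only from properties documented later in this article and assuming the standard `session.update` event envelope (a `type` field plus a `session` object); the values shown are illustrative:

```json
{
  "type": "session.update",
  "session": {
    "input_audio_echo_cancellation": {
      "type": "server_echo_cancellation"
    },
    "input_audio_noise_reduction": {
      "type": "azure_deep_noise_suppression"
    },
    "turn_detection": {
      "type": "azure_semantic_vad",
      "end_of_utterance_detection": {
        "model": "semantic_detection_v1",
        "threshold": 0.01,
        "timeout": 2
      }
    }
  }
}
```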
The server responds with a [`session.updated`](../openai/realtime-audio-reference.md?context=/azure/ai-services/speech-service/context/context) event.
## Session Properties
The following sections describe the properties of the `session` object that can be configured in the `session.update` message.
> [!TIP]
> For comprehensive descriptions of supported events and properties, see the [Azure OpenAI Realtime API events reference documentation](../openai/realtime-audio-reference.md?context=/azure/ai-services/speech-service/context/context). This document provides a reference for the event message properties that the Voice Live API enhances.
### Input audio properties
You can use input audio properties to configure the input audio stream.

| Property | Type | Required or optional | Description |
|----------|----------|----------|------------|
|`input_audio_echo_cancellation`| object | Optional | Enhances the input audio quality by removing the echo from the model's own voice without requiring any client-side echo cancellation.<br/><br/>Set the `type` property of `input_audio_echo_cancellation` to enable echo cancellation.<br/><br/>The supported value for `type` is `server_echo_cancellation`, which is used when the model's voice is played back to the end-user through a speaker and the microphone picks up the model's own voice. |
|`input_audio_noise_reduction`| object | Optional | Enhances the input audio quality by suppressing or removing environmental background noise.<br/><br/>Set the `type` property of `input_audio_noise_reduction` to enable noise suppression.<br/><br/>The supported value for `type` is `azure_deep_noise_suppression`, which optimizes for speakers closest to the microphone. |
Here's an example of input audio properties in a session object:
```json
{
  "input_audio_echo_cancellation": {
    "type": "server_echo_cancellation"
  },
  "input_audio_noise_reduction": {
    "type": "azure_deep_noise_suppression"
  }
}
```
The Voice Live API offers conversational enhancements to provide robustness to the end-user experience.
### Turn Detection Parameters
Turn detection is the process of detecting when the end-user started or stopped speaking. The Voice Live API builds on the Azure OpenAI Realtime API `turn_detection` property to configure turn detection. The `azure_semantic_vad` type is one differentiator between the Voice Live API and the Azure OpenAI Realtime API.
| Property | Type | Required or optional | Description |
|----------|----------|----------|------------|
|`type`| string | Optional | The type of turn detection system to use. Type `server_vad` detects the start and end of speech based on audio volume.<br/><br/>Type `azure_semantic_vad` detects the start and end of speech based on semantic meaning. Azure semantic voice activity detection (VAD) improves turn detection by removing filler words to reduce the false alarm rate. The current list of filler words is `['ah', 'umm', 'mm', 'uh', 'huh', 'oh', 'yeah', 'hmm']`. The service ignores these words when there's an ongoing response. The filler word removal feature assumes the client plays response audio as soon as it receives it. The `azure_semantic_vad` type isn't supported with the `gpt-4o-realtime-preview` and `gpt-4o-mini-realtime-preview` models.<br/><br/>The default value is `server_vad`. |
|`threshold`| number | Optional | A higher threshold requires a higher-confidence signal that the user is trying to speak. |
|`prefix_padding_ms`| integer | Optional | The amount of audio, measured in milliseconds, to include before the detected start of speech. |
|`silence_duration_ms`| integer | Optional | The duration of the user's silence, measured in milliseconds, to detect the end of speech. |
|`end_of_utterance_detection`| object | Optional | Configuration for end of utterance detection. The Voice Live API offers advanced end-of-turn detection to indicate when the end-user stopped speaking while allowing for natural pauses. End of utterance detection can significantly reduce premature end-of-turn signals without adding user-perceivable latency. End of utterance detection is only available when using `azure_semantic_vad`.<br/><br/>Properties of `end_of_utterance_detection` include:<br/>- `model`: The model to use for end of utterance detection. The supported value is `semantic_detection_v1`.<br/>- `threshold`: Threshold to determine the end of utterance (0.0 to 1.0). The default value is 0.01.<br/>- `timeout`: Timeout in seconds. The default value is 2 seconds. |
Here's an example of end of utterance detection in a session object:
```json
{
  "turn_detection": {
    "type": "azure_semantic_vad",
    "remove_filler_words": false,
    "end_of_utterance_detection": {
      "model": "semantic_detection_v1",
      "threshold": 0.01,
      "timeout": 2
    }
  }
}
```
### Audio output through Azure text to speech
You can use the `voice` parameter to specify a standard or custom voice. The voice is used for audio output.
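As a sketch only, a session that selects a prebuilt Azure voice might look like the following. The `name` and `type` property names and values here are assumptions for illustration; check them against the property table that follows.

```json
{
  "voice": {
    "name": "en-US-AvaNeural",
    "type": "azure-standard"
  }
}
```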
The `voice` object has the following properties:
To configure the viseme, you can set the `animation.outputs` in the `session.update` message.
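Here's a minimal sketch of that configuration; the `viseme_id` output value is an assumption for illustration and should be checked against the supported animation outputs:

```json
{
  "animation": {
    "outputs": ["viseme_id"]
  }
}
```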
The `output_audio_timestamp_types` parameter is optional. It configures which audio timestamps should be returned for generated audio. Currently, it only supports `word`.
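For example, a session that requests word-level timestamps for generated audio might be configured like this sketch; the list form is an assumption based on the plural parameter name:

```json
{
  "output_audio_timestamp_types": ["word"]
}
```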
The service returns the viseme alignment in the response when the audio is generated.