Azure Communication Services provides bidirectional audio streaming capabilities, offering developers powerful tools to capture, analyze, and process audio content during active calls. This development paves the way for new possibilities in real-time communication for developers and businesses alike.
By integrating bidirectional audio streaming with services like Azure OpenAI and other real-time voice APIs, businesses can achieve seamless, low-latency communication. This significantly enhances the development and deployment of conversational AI solutions, allowing for more engaging and efficient interactions.
## Common use cases
With bidirectional streaming, businesses can now elevate their voice solutions to low-latency, human-like, interactive conversational AI agents. Our bidirectional streaming APIs enable developers to stream audio from an ongoing call on Azure Communication Services to their web servers in real-time, and stream audio back into the call. While the initial focus of these features is to help businesses create conversational AI agents, other use cases include Natural Language Processing for conversation analysis or providing real-time insights and suggestions to agents while they are in active interaction with end users.
This public preview supports the ability for developers to access real-time audio streams over a WebSocket from Azure Communication Services and stream audio back into the call.
### Real-time call assistance
- **Leverage conversational AI solutions:** Develop sophisticated customer support virtual agents that can interact with customers in real time, providing immediate responses and solutions.
- **Personalized customer experiences:** By harnessing real-time data, businesses can offer more personalized and dynamic customer interactions, leading to increased satisfaction and loyalty.
- **Reduce wait times for customers:** Using bidirectional audio streams with Large Language Models (LLMs), you can create virtual agents that serve as the first point of contact for customers, reducing their wait time for a human agent.
### Authentication
- **Biometric authentication:** Use the audio streams to carry out voice authentication by running the call audio through your voice recognition or matching tool.
## Sample architecture showing how bidirectional audio streaming can be used for conversational AI agents
[![Diagram of bidirectional audio streaming for a conversational AI agent.](./media/bidirectional-streaming.png)](./media/bidirectional-streaming.png#lightbox)
## Supported formats
### Mixed
Contains mixed audio of all participants on the call. All audio is flattened into one stream.
### Unmixed
Contains audio per participant per channel, with support for up to four channels for the four most dominant speakers at any point in a call. You also get a participantRawID that you can use to determine the speaker.
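With the unmixed format, a server application can group incoming frames by speaker. The following is a minimal Python sketch, assuming the per-channel packets have already been parsed from the WebSocket JSON into dictionaries shaped like the audio packets shown later in this article:

```python
import base64
from collections import defaultdict

def demux_unmixed_packets(packets):
    """Group decoded PCM bytes by speaker using participantRawID.

    `packets` is a list of parsed audioData payloads, one per channel message.
    """
    buffers = defaultdict(bytearray)
    for packet in packets:
        if packet.get("silent"):
            continue  # skip silent frames
        pcm = base64.b64decode(packet["data"])
        buffers[packet["participantRawID"]].extend(pcm)
    return buffers

# Example with two hypothetical participants, each frame 640 bytes
# (one 20-ms frame of 16-bit PCM mono at 16 kHz):
frame = base64.b64encode(b"\x00\x01" * 320).decode()
packets = [
    {"participantRawID": "8:acs:aaa", "data": frame, "silent": False},
    {"participantRawID": "8:acs:bbb", "data": frame, "silent": False},
    {"participantRawID": "8:acs:aaa", "data": frame, "silent": False},
]
buffers = demux_unmixed_packets(packets)
print(sorted((pid, len(buf)) for pid, buf in buffers.items()))
# [('8:acs:aaa', 1280), ('8:acs:bbb', 640)]
```

The participant IDs here are placeholders; in a real call, `participantRawID` carries the actual participant identifier.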
## Additional information
Developers can use the following information about audio sent from Azure Communication Services to convert the audio packets into audible content for their applications.
- Framerate: 50 frames per second
- Packet stream rate: one packet every 20 ms
- Data packet size: 640 bytes for 16,000 Hz and 960 bytes for 24,000 Hz
- Audio metric: 16-bit PCM mono at 16,000 Hz or 24,000 Hz
- Public string data is a base64 string that should be converted into a byte array to create a raw PCM file.
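To sanity-check these numbers: a 20-ms frame of 16-bit (2-byte) PCM mono contains sample rate × 0.02 samples, which yields exactly the packet sizes listed above. A small Python sketch that computes the frame size and decodes a base64 payload into raw PCM bytes:

```python
import base64

def frame_size_bytes(sample_rate_hz, frame_ms=20, bytes_per_sample=2, channels=1):
    # samples per frame x bytes per sample x channels
    return int(sample_rate_hz * frame_ms / 1000) * bytes_per_sample * channels

print(frame_size_bytes(16000))  # 640 bytes per packet at 16 kHz
print(frame_size_bytes(24000))  # 960 bytes per packet at 24 kHz

# Decode a packet's base64 "data" field into raw PCM bytes
# (a stand-in payload here, not a real capture):
encoded = base64.b64encode(b"\x00\x00" * 320).decode()
pcm = base64.b64decode(encoded)
print(len(pcm))  # 640
```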
## Billing
See the [Azure Communication Services pricing page](https://azure.microsoft.com/pricing/details/communication-services/?msockid=3b3359f3828f6cfe30994a9483c76d50) for information on how audio streaming is billed. Prices can be found in the calling category under audio streaming.
## Next steps
Check out the [audio streaming quickstart](../../how-tos/call-automation/audio-streaming-quickstart.md) to learn more.

## Prerequisites
- An Azure Communication Services resource. See [Create an Azure Communication Services resource](../../../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp).
- A new web service application created using the [Call Automation SDK](../../../quickstarts/call-automation/callflows-for-customer-interactions.md).
- The latest [.NET library](https://dotnet.microsoft.com/download/dotnet-core) for your operating system.
- A WebSocket server that can send and receive media streams.
## Set up a WebSocket server
Azure Communication Services requires your server application to set up a WebSocket server to stream audio in real-time. WebSocket is a standardized protocol that provides a full-duplex communication channel over a single TCP connection.
You can review documentation [here](https://azure.microsoft.com/blog/introduction-to-websockets-on-windows-azure-web-sites/) to learn more about WebSockets and how to use them.
## Receiving and sending audio streaming data
There are multiple ways to start receiving audio streams, which can be configured using the `startMediaStreaming` flag in the `mediaStreamingOptions` setup. You can also specify the desired sample rate for receiving or sending audio data using the `audioFormat` parameter. Currently supported formats are PCM 24 kHz mono and PCM 16 kHz mono, with the default being PCM 16 kHz mono.
To enable bidirectional audio streaming, where you're sending audio data into the call, you can enable the `EnableBidirectional` flag. For more details, refer to the [API specifications](https://learn.microsoft.com/rest/api/communication/callautomation/answer-call/answer-call?view=rest-communication-callautomation-2024-06-15-preview&tabs=HTTP#mediastreamingoptions).
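
As a sketch, the media streaming options in an Answer Call request body might look like the following. The field names are taken from the preview API specification linked above; check the spec for the exact shape and allowed values:

```json
{
  "mediaStreamingOptions": {
    "transportUrl": "wss://your-websocket-server.example.com/ws",
    "transportType": "websocket",
    "contentType": "audio",
    "audioChannelType": "unmixed",
    "startMediaStreaming": true,
    "enableBidirectional": true,
    "audioFormat": "Pcm24KMono"
  }
}
```

The `transportUrl` here is a placeholder for your own WebSocket server endpoint.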
### Start streaming audio to your web server at the time of answering the call
Enable automatic audio streaming when the call is established by setting the flag `startMediaStreaming: true`.
This setting ensures that audio streaming starts automatically as soon as the call is connected.
When Azure Communication Services receives the URL for your WebSocket server, it establishes a connection to it. Once the connection is successfully made, streaming is initiated.
### Start streaming audio to your webserver while a call is in progress
To start media streaming during the call, set the `startMediaStreaming` parameter to `false` (the default) when answering the call, and later use the start media streaming API to begin streaming.
When Azure Communication Services receives the URL for your WebSocket server, it creates a connection to it. Once Azure Communication Services successfully connects to your WebSocket server and streaming starts, it sends the first data packet, which contains metadata about the incoming media packets.

The metadata packet looks like this:

```code
{
    "kind": <string>, // What kind of data this is, e.g. AudioMetadata, AudioData.
    "audioMetadata": {
        "subscriptionId": <string>, // Unique identifier for a subscription request.
        "encoding": <string>, // Audio encoding, e.g. PCM.
        "sampleRate": <number>, // Sample rate of the audio, e.g. 16000.
        "channels": <number>, // Number of audio channels.
        "length": <number> // Size of each audio packet in bytes.
    }
}
```

To stop receiving audio streams during a call, you can use the **Stop streaming API**. This allows you to stop the audio streaming at any point in the call. There are two ways that audio streaming can be stopped:
- **Triggering the Stop streaming API:** Use the API to stop receiving audio streaming data while the call is still active.
- **Automatic stop on call disconnect:** Audio streaming automatically stops when the call is disconnected.
```csharp
// Example WebSocket receive loop for incoming audio packets.
var buffer = new byte[2048];
while (webSocket.State == WebSocketState.Open)
{
    var result = await webSocket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
    if (result.MessageType == WebSocketMessageType.Text)
    {
        var packet = Encoding.UTF8.GetString(buffer, 0, result.Count);
        // Add your code here to process the received audio chunk
    }
}
```
The first packet you receive contains metadata about the stream, including audio settings such as encoding, sample rate, and other configuration details.
After sending the metadata packet, Azure Communication Services (ACS) will begin streaming audio media to your WebSocket server.

```json
{
    "kind": "AudioData",
    "audioData": {
        "timestamp": "2024-11-15T19:16:12.925Z",
        "participantRawID": "8:acs:3d20e1de-0f28-41c5…",
        "data": "5ADwAOMA6AD0A…",
        "silent": false
    }
}
```

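A minimal Python sketch of a handler that distinguishes the first metadata packet from subsequent audio packets and decodes the audio payload, assuming each WebSocket text message carries one JSON document shaped as shown above:

```python
import base64
import json

def handle_message(message, state):
    """Route one incoming WebSocket message; return raw PCM bytes for audio."""
    packet = json.loads(message)
    if packet["kind"] == "AudioMetadata":
        # First packet: remember the stream's audio settings.
        state["metadata"] = packet["audioMetadata"]
        return None
    if packet["kind"] == "AudioData":
        audio = packet["audioData"]
        if audio["silent"]:
            return b""  # nothing worth forwarding
        return base64.b64decode(audio["data"])  # raw PCM bytes
    return None

# Usage with stand-in messages (not real captures):
state = {}
handle_message('{"kind": "AudioMetadata", "audioMetadata": {"sampleRate": 16000}}', state)
pcm = handle_message(
    '{"kind": "AudioData", "audioData": {"timestamp": "t", "participantRawID": "8:acs:x", '
    '"data": "' + base64.b64encode(b"\x01\x02\x03\x04").decode() + '", "silent": false}}',
    state,
)
print(state["metadata"]["sampleRate"], len(pcm))  # 16000 4
```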
## Sending audio streaming data to Azure Communication Services
If bidirectional streaming is enabled using the `EnableBidirectional` flag in the `MediaStreamingOptions`, you can stream audio data back to Azure Communication Services, which plays the audio into the call.
Once Azure Communication Services begins streaming audio to your WebSocket server, you can relay the audio to your AI services. After your AI service processes the audio content, you can stream the audio back to the ongoing call in Azure Communication Services.
For example, another service, such as Azure OpenAI or another voice-based Large Language Model, can process the audio and transmit generated audio back into the call.
You can also control the playback of audio in the call when streaming back to Azure Communication Services, based on your logic or business flow. For example, when voice activity is detected and you want to stop the queued up audio, you can send a stop message via the WebSocket to stop the audio from playing in the call.
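
As an illustrative sketch, outbound messages could be built like this in Python. The exact shapes of the audio message and the stop message (`StopAudio` here) are assumptions; confirm them against the preview API documentation before relying on them:

```python
import base64
import json

def make_audio_message(pcm_bytes):
    # Assumed shape: mirrors the inbound AudioData packet, minus server-set fields.
    return json.dumps({
        "kind": "AudioData",
        "audioData": {"data": base64.b64encode(pcm_bytes).decode()},
    })

def make_stop_message():
    # Assumed "barge-in" message that clears queued audio in the call.
    return json.dumps({"kind": "StopAudio", "stopAudio": {}})

# One 20-ms frame of 16-bit PCM mono at 16 kHz (640 bytes of stand-in data):
msg = json.loads(make_audio_message(b"\x00\x01" * 320))
print(msg["kind"], len(base64.b64decode(msg["audioData"]["data"])))  # AudioData 640
print(make_stop_message())
```

Your server would send these JSON strings as text frames over the same WebSocket connection that receives the inbound audio.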