Azure Communication Services requires your server application to set up a WebSocket server to stream audio in real-time. WebSocket is a standardized protocol that provides a full-duplex communication channel over a single TCP connection.
You can optionally use Azure WebApps to create an application that receives audio streams over a WebSocket connection. Follow this [quickstart](https://azure.microsoft.com/blog/introduction-to-websockets-on-windows-azure-web-sites/).
## Establish a call
Establish a call and provide streaming details
## Receiving and sending audio streaming data
There are multiple ways to start receiving the audio stream, which can be configured using the `startMediaStreaming` flag in the `mediaStreamingOptions` setup. You can also specify the desired sample rate used for receiving or sending audio data with the `audioFormat` parameter. Currently supported formats are PCM 24K mono and PCM 16K mono, with the default being PCM 16K mono.
To enable bidirectional audio streaming, where you send audio data into the call, set the `EnableBidirectional` flag.
### Start streaming audio to your webserver at the time of answering the call
Enable automatic audio streaming when the call is established by setting the flag `startMediaStreaming: true`.
This ensures that audio streaming starts automatically as soon as the call is connected.
When Azure Communication Services receives the URL for your WebSocket server, it establishes a connection to it. Once the connection is successfully made, streaming is initiated.
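As a concrete sketch of the options described above, answering a call with streaming enabled might look like the following. This assumes the Python Call Automation SDK; the exact class, enum, parameter names, and defaults are assumptions and should be verified against your installed SDK version.

```python
# Hypothetical sketch: class, enum, and parameter names are assumptions based on
# the azure-communication-callautomation Python SDK; verify against your version.
from azure.communication.callautomation import (
    AudioFormat,
    CallAutomationClient,
    MediaStreamingAudioChannelType,
    MediaStreamingContentType,
    MediaStreamingOptions,
    MediaStreamingTransportType,
)

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

media_streaming_options = MediaStreamingOptions(
    transport_url="wss://<your-websocket-server>",        # your WebSocket server URL
    transport_type=MediaStreamingTransportType.WEBSOCKET,
    content_type=MediaStreamingContentType.AUDIO,
    audio_channel_type=MediaStreamingAudioChannelType.MIXED,
    start_media_streaming=True,    # stream as soon as the call is connected
    enable_bidirectional=True,     # allow sending audio back into the call
    audio_format=AudioFormat.PCM24_K_MONO,  # or PCM16_K_MONO (the default)
)

client.answer_call(
    incoming_call_context="<incoming-call-context>",
    callback_url="<callback-url>",
    media_streaming=media_streaming_options,
)
```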
### Start streaming audio to your webserver while a call is in progress
To start media streaming during the call, use the start media streaming API. To do so, set the `startMediaStreaming` parameter to `false` (the default) when setting up the call, and later invoke the start API to enable media streaming.
When Azure Communication Services receives the URL for your WebSocket server, it creates a connection to it. Once Azure Communication Services successfully connects to your WebSocket server and streaming has started, it sends the first data packet, which contains metadata about the incoming media packets.
The metadata packet will look like this:

```code
{
    "kind": <string>, // What kind of data this is, e.g. AudioMetadata, AudioData.
    "audioMetadata": {
        "subscriptionId": <string>, // Unique identifier for a subscription request.
        "encoding": <string>, // Audio encoding of the stream.
        "sampleRate": <number>, // Sample rate of the audio stream.
        "channels": <number>, // Number of audio channels.
        "length": <number> // Length of each audio packet.
    }
}
```
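A minimal sketch of dispatching on this first packet, assuming messages arrive as JSON text over the WebSocket. The handler name and the sample field values are illustrative, not part of the service API:

```python
import json

def handle_websocket_message(raw: str) -> str:
    """Dispatch an incoming streaming packet by its "kind" field."""
    packet = json.loads(raw)
    kind = packet.get("kind")
    if kind == "AudioMetadata":
        # First packet: capture the stream configuration before audio arrives.
        meta = packet["audioMetadata"]
        print(f"subscription {meta['subscriptionId']}: "
              f"{meta.get('encoding')} at {meta.get('sampleRate')} Hz")
    elif kind == "AudioData":
        ...  # handle audio frames once they start arriving
    return kind

# Illustrative metadata packet (values are made up):
sample = ('{"kind": "AudioMetadata", "audioMetadata": '
          '{"subscriptionId": "abc-123", "encoding": "PCM", "sampleRate": 16000}}')
assert handle_websocket_message(sample) == "AudioMetadata"
```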
To stop receiving audio streams during a call, use the **Stop streaming API**. This allows you to stop the audio streaming at any point in the call. There are two ways that audio streaming can be stopped:
1. **Triggering the Stop streaming API:** Use the API to stop receiving audio streaming data while the call is still active.
2. **Automatic stop on call disconnect:** Audio streaming will automatically stop when the call is disconnected.
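A hedged sketch of both operations, assuming `start_media_streaming` and `stop_media_streaming` methods on the call connection client in the Python Call Automation SDK (method and class names should be verified against your installed SDK version):

```python
# Hypothetical sketch: method names are assumptions based on the
# azure-communication-callautomation Python SDK; verify against your version.
from azure.communication.callautomation import CallAutomationClient

client = CallAutomationClient.from_connection_string("<acs-connection-string>")
call_connection = client.get_call_connection("<call-connection-id>")

# Start streaming mid-call (when startMediaStreaming was left as false):
call_connection.start_media_streaming()

# Stop streaming at any point while the call is still active:
call_connection.stop_media_streaming()
```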
The first packet you receive will contain metadata about the streaming, including audio settings such as encoding, sample rate, and other configuration details.
After sending the metadata packet, Azure Communication Services (ACS) will begin streaming audio media to your WebSocket server.
```json
{
    "kind": "AudioData",
    "audioData": {
        "timestamp": "2024-11-15T19:16:12.925Z",
        "participantRawID": "8:acs:3d20e1de-0f28-41c5…",
        "data": "5ADwAOMA6AD0A…",
        "silent": false
    }
}
```
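Assuming packets of the shape above, a minimal handler can decode one `AudioData` frame into raw PCM bytes. The helper name and the sample payload below are illustrative, not part of the service API:

```python
import base64
import json

def extract_audio_frame(raw: str) -> bytes:
    """Decode one AudioData packet into raw PCM bytes, skipping silent frames."""
    packet = json.loads(raw)
    if packet.get("kind") != "AudioData":
        return b""
    audio = packet["audioData"]
    if audio.get("silent"):
        return b""  # silence: nothing useful to forward
    return base64.b64decode(audio["data"])

# Illustrative packet (the payload here is made up):
sample = json.dumps({
    "kind": "AudioData",
    "audioData": {
        "timestamp": "2024-11-15T19:16:12.925Z",
        "data": base64.b64encode(b"\x00\x01\x02\x03").decode("ascii"),
        "silent": False,
    },
})
assert extract_audio_frame(sample) == b"\x00\x01\x02\x03"
```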
## Sending audio streaming data to Azure Communication Services
If bidirectional streaming is enabled using the `EnableBidirectional` flag in the `MediaStreamingOptions`, you can stream audio data back to Azure Communication Services, which will play the audio into the call.
Once Azure Communication Services begins streaming audio to your WebSocket server, you can relay the audio to the LLM and vice versa. After the LLM processes the audio content, it streams the response back to your service, which you can then send into the Azure Communication Services call.
The example below demonstrates how to transmit audio data back into the call after it has been processed by another service, for instance Azure OpenAI or another voice-based large language model.
You can also control the playback of audio in the call when streaming back to Azure Communication Services, based on your logic or business flow. For example, when voice activity is detected and you want to stop the queued up audio, you can send a stop message via the WebSocket to stop the audio from playing in the call.
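As a sketch, assuming the outbound messages mirror the inbound packet shape (the exact outbound schema, field names, and the `StopAudio` kind are assumptions to verify against the service documentation), building the two message types might look like:

```python
import base64
import json

def make_audio_message(pcm_bytes: bytes) -> str:
    # Assumed outbound shape mirroring the inbound AudioData packet;
    # verify the exact schema against the service documentation.
    return json.dumps({
        "kind": "AudioData",
        "audioData": {"data": base64.b64encode(pcm_bytes).decode("ascii")},
    })

def make_stop_audio_message() -> str:
    # Assumed message kind for cancelling queued playback (e.g. on barge-in).
    return json.dumps({"kind": "StopAudio", "stopAudio": {}})

# Typical usage inside your WebSocket handler (names illustrative):
# await websocket.send(make_audio_message(llm_response_pcm))
# await websocket.send(make_stop_audio_message())  # when voice activity is detected
```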