**File:** `articles/communication-services/concepts/call-automation/call-automation.md`
ms.author: askaur
---

# Call Automation Overview
Azure Communication Services Call Automation enables developers to build server-based, intelligent call workflows and call recording for voice and Public Switched Telephone Network (PSTN) channels. The SDKs, available in C#, Java, JavaScript, and Python, use an action-event model to help you build personalized customer interactions. Your communication applications can listen to real-time call events and perform control plane actions (such as answer, transfer, play audio, and start recording) to steer and control calls based on your business logic.
## Common use cases
The Call Automation events are sent to the webhook callback URI specified when you answer or place the call.

| Event | Description |
|---|---|
|`CallConnected`| The call successfully started (when using `Answer` or `Create` action) or your application successfully connected to an ongoing call (when using `Connect` action). |
|`CallDisconnected`| Your application has been disconnected from the call. |
|`ConnectFailed`| Your application failed to connect to a call (for `Connect` call action only). |
|`CallTransferAccepted`| Transfer action successfully completed and the transferee is connected to the target participant. |
|`CallTransferFailed`| The transfer action failed. |
|`AddParticipantSucceeded`| Your application successfully added a participant to the call. |
|`AddParticipantFailed`| Your application was unable to add a participant to the call (due to an error, or because the participant didn't accept the invite). |
|`CancelAddParticipantSucceeded`| Your application canceled an `AddParticipant` request successfully (the participant wasn't added to the call). |
|`PlayCanceled`| The requested play action has been canceled. |
|`RecognizeCompleted`| Recognition of user input successfully completed. |
|`RecognizeCanceled`| The requested `Recognize` action has been canceled. |
|`RecognizeFailed`| Recognition of user input was unsuccessful. <br/>*For more information about recognize action events, see the how-to guide for [gathering user input](../../how-tos/call-automation/recognize-action.md).*|
|`RecordingStateChanged`| Status of recording action has changed from active to inactive or vice versa. |
|`ContinuousDtmfRecognitionToneReceived`|`StartContinuousDtmfRecognition` completed successfully and a DTMF tone was received from the participant. |
|`ContinuousDtmfRecognitionToneFailed`|`StartContinuousDtmfRecognition` completed but an error occurred while handling a DTMF tone from the participant. |
To learn how to secure the callback event delivery, see the how-to guide on securing your webhook endpoint.
### Operation Callback URI
Operation Callback URI is an optional parameter in some mid-call APIs that use events as their async responses. By default, all events are sent to the default callback URI that you set in the `CreateCall` or `AnswerCall` request when the call is established. When you provide an Operation Callback URI, the API instead sends the events for that individual (one-time) request to the new URI.
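
As an illustrative sketch, a one-time callback URI might be passed on a play request like this. The `OperationCallbackUri` property assumes a recent version of the `Azure.Communication.CallAutomation` SDK, and the endpoint URI is hypothetical:

```csharp
// Sketch: route events for this single play request to a one-time callback URI.
// The https://contoso.example.com endpoint is a hypothetical placeholder.
var playSource = new TextSource("Please hold while we connect you.");
var playOptions = new PlayToAllOptions(new List<PlaySource> { playSource })
{
    // Events raised by this request (for example, PlayCompleted or PlayFailed)
    // are delivered here instead of the default callback URI.
    OperationCallbackUri = new Uri("https://contoso.example.com/api/oneTimeCallbacks")
};

await callAutomationClient.GetCallConnection(callConnectionId)
    .GetCallMedia()
    .PlayToAllAsync(playOptions);
```

Events for all other requests on the call continue to flow to the default callback URI.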

---

**File:** `articles/communication-services/how-tos/call-automation/includes/recognize-action-quickstart-csharp.md`
---
title: include file
description: C# Recognize action quickstart
services: azure-communication-services
author: Kunaal
ms.service: azure-communication-services
ms.author: kpunjabi
---
- Azure Communication Services resource. See [Create an Azure Communication Services resource](../../../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp). Note the connection string for this resource.
- Create a new web service application using the [Call Automation SDK](../../../quickstarts/call-automation/callflows-for-customer-interactions.md).
- The latest [.NET library](https://dotnet.microsoft.com/download/dotnet-core) for your operating system.
- Create and connect [Azure AI services to your Azure Communication Services resource](../../../concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md).
The following parameters are available to customize the Recognize function:
| Parameter | Type | Default (if not specified) | Description | Required or Optional |
|---|---|---|---|---|
|`Prompt` <br/><br/> *(For details, see [Customize voice prompts to users with Play action](../play-ai-action.md))*| FileSource, TextSource | Not set | The message to play before recognizing input. | Optional |
|`InterToneTimeout`| TimeSpan | 2 seconds <br/><br/>**Min:** 1 second <br/>**Max:** 60 seconds | Limit in seconds that Azure Communication Services waits for the caller to press another digit (inter-digit timeout). | Optional |
|`InitialSegmentationSilenceTimeoutInSeconds`| Integer | 0.5 second | How long the recognize action waits for input before considering it a timeout. See [How to recognize speech](/azure/ai-services/speech-service/how-to-recognize-speech). | Optional |
|`RecognizeInputsType`| Enum | dtmf | Type of input that is recognized. Options are `dtmf`, `choices`, `speech`, and `speechordtmf`. | Required |
|`InitialSilenceTimeout`| TimeSpan | 5 seconds<br/><br/>**Min:** 0 seconds <br/>**Max:** 300 seconds (DTMF) <br/>**Max:** 20 seconds (Choices) <br/>**Max:** 20 seconds (Speech)| Adjusts how much nonspeech audio is allowed before a phrase, before the recognition attempt ends in a "no match" result. See [How to recognize speech](/azure/ai-services/speech-service/how-to-recognize-speech). | Optional |
|`MaxTonesToCollect`| Integer | No default<br/><br/>**Min:** 1| Number of digits a developer expects as input from the participant. | Required |
|`StopTones`| IEnumeration\<DtmfTone\> | Not set | The digits participants can press to escape out of a batch DTMF event. | Optional |
|`InterruptPrompt`| Bool | True | Whether the participant can interrupt the playMessage by pressing a digit. | Optional |
|`InterruptCallMediaOperation`| Bool | True | If this flag is set, it interrupts the current call media operation. For example, if any audio is being played, it interrupts that operation and initiates recognize. | Optional |
|`OperationContext`| String | Not set | String that developers can pass with the action, useful for storing context about the events they receive. | Optional |
|`Phrases`| String | Not set | List of phrases associated with the label. Hearing any of these phrases results in a successful recognition. | Required |
|`Tone`| String | Not set | The tone to recognize if the user decides to press a number instead of using speech. | Optional |
|`Label`| String | Not set | The key value for recognition. | Required |
|`Language`| String | en-US | The language that is used for recognizing speech. | Optional |
|`EndSilenceTimeout`| TimeSpan | 0.5 second | The final pause of the speaker used to detect the final result that gets generated as speech. | Optional |
>[!NOTE]
>In situations where both DTMF and speech are in the `recognizeInputsType`, the recognize action acts on the first input type received. For example, if the user presses a keypad number first then the recognize action considers it a DTMF event and continues listening for DTMF tones. If the user speaks first then the recognize action considers it a speech recognition event and listens for voice input.
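
As a hedged illustration of how several of the parameters above fit together, the following sketch builds DTMF recognize options. The phone number, prompt text, and operation context are placeholders, and exact constructor shapes may vary by SDK version:

```csharp
// Sketch: collect a four-digit PIN via DTMF, using parameters from the table above.
var recognizeOptions = new CallMediaRecognizeDtmfOptions(
    new PhoneNumberIdentifier("+14255550123"), // placeholder target participant
    maxTonesToCollect: 4)
{
    InitialSilenceTimeout = TimeSpan.FromSeconds(10), // wait up to 10s for the first digit
    InterToneTimeout = TimeSpan.FromSeconds(5),       // wait up to 5s between digits
    InterruptPrompt = true,                           // caller can barge in on the prompt
    StopTones = new List<DtmfTone> { DtmfTone.Pound },
    Prompt = new TextSource("Please enter your four-digit PIN, followed by pound."),
    OperationContext = "PinCollection" // hypothetical context string
};
```

Passing these options to `StartRecognizingAsync` starts the recognition; the outcome arrives later as a `RecognizeCompleted` or `RecognizeFailed` event on your callback URI.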
## Create a new C# application

```console
dotnet new web -n MyApplication
```
## Install the NuGet package
Get the NuGet package from [NuGet Gallery | Azure.Communication.CallAutomation](https://www.nuget.org/packages/Azure.Communication.CallAutomation/). Follow the instructions to install the package.
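
Alternatively, if you use the .NET CLI, you can add the package from your project directory (this assumes the .NET SDK is installed):

```shell
# Adds the latest stable Call Automation package reference to the current project.
dotnet add package Azure.Communication.CallAutomation
```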
## Establish a call
By this point you should be familiar with starting calls. For more information about making a call, see [Quickstart: Make an outbound call](../../../quickstarts/call-automation/quickstart-make-an-outbound-call.md). You can also use the code snippet provided here to understand how to answer a call.
```csharp
var callAutomationClient = new CallAutomationClient("<Azure Communication Services connection string>");

// ...

var recognizeResult = await callAutomationClient.GetCallConnection(callConnectionId)
    .StartRecognizingAsync(recognizeOptions);
```
For speech-to-text flows, the Call Automation Recognize action also supports the use of [custom speech models](/azure/machine-learning/tutorial-train-model). Features like custom speech models can be useful when you're building an application that needs to listen for complex words that the default speech-to-text models may not understand. One example is when you're building an application for the telemedical industry and your virtual agent needs to be able to recognize medical terms. You can learn more in [Create a custom speech project](/azure/ai-services/speech-service/how-to-custom-speech-create-project).
### Speech-to-Text Choices
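
The code sample for this section didn't survive extraction. As a hedged reconstruction (labels, phrases, and the target phone number are illustrative, and exact constructor shapes may vary by SDK version), recognizing choices might look like:

```csharp
// Sketch: recognize a spoken choice ("Confirm"/"Cancel") or a mapped DTMF tone.
var choices = new List<RecognitionChoice>
{
    new RecognitionChoice("Confirm", new List<string> { "Confirm", "Yes" }) { Tone = DtmfTone.One },
    new RecognitionChoice("Cancel", new List<string> { "Cancel", "No" }) { Tone = DtmfTone.Two }
};

var recognizeOptions = new CallMediaRecognizeChoiceOptions(
    new PhoneNumberIdentifier("+14255550123"), // placeholder target participant
    choices)
{
    InterruptPrompt = true,
    InitialSilenceTimeout = TimeSpan.FromSeconds(10),
    Prompt = new TextSource("Say confirm or cancel."),
    OperationContext = "AppointmentReminderMenu" // hypothetical context string
};

var recognizeResult = await callAutomationClient.GetCallConnection(callConnectionId)
    .GetCallMedia()
    .StartRecognizingAsync(recognizeOptions);
```

When a caller says one of the phrases (or presses the mapped tone), the `RecognizeCompleted` event carries the matched label.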
## Receiving recognize event updates
Developers can subscribe to `RecognizeCompleted` and `RecognizeFailed` events on the registered webhook callback. Use this callback with business logic in your application to determine next steps when one of the events occurs.
### Example of how you can deserialize the *RecognizeCompleted* event:
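
The example itself was cut off in this excerpt. As a hedged sketch (the `/api/callbacks` route is illustrative, and the parser and result types assume a recent `Azure.Communication.CallAutomation` SDK), handling the callback in a minimal ASP.NET Core app might look like:

```csharp
app.MapPost("/api/callbacks", (CloudEvent[] cloudEvents) =>
{
    foreach (var cloudEvent in cloudEvents)
    {
        // Parse the raw CloudEvent into a typed Call Automation event.
        CallAutomationEventBase parsedEvent = CallAutomationEventParser.Parse(cloudEvent);
        if (parsedEvent is RecognizeCompleted recognizeCompleted)
        {
            switch (recognizeCompleted.RecognizeResult)
            {
                case DtmfResult dtmfResult:
                    var tones = dtmfResult.Tones;   // digits the caller entered
                    break;
                case ChoiceResult choiceResult:
                    var label = choiceResult.Label; // label of the matched choice
                    break;
                case SpeechResult speechResult:
                    var text = speechResult.Speech; // recognized speech text
                    break;
            }
        }
    }
    return Results.Ok();
});
```

Subscribing to `RecognizeFailed` follows the same pattern, with the failure reason available on the event's result information.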