articles/cognitive-services/Speech-Service/tutorial-voice-enable-your-bot-speech-sdk.md (16 additions & 16 deletions)
@@ -41,7 +41,7 @@ Here's what this tutorial covers:
 > * Create new Azure resources
 > * Build, test, and deploy the Echo Bot sample to an Azure App Service
 > * Register your bot with Direct Line Speech channel
-> * Build and run the Direct Line Speech Client to interact with your Echo Bot
+> * Build and run the Windows Voice Assistant Client to interact with your Echo Bot
 > * Add custom keyword activation
 > * Learn to change the language of the recognized and spoken speech
@@ -272,21 +272,21 @@ Now it's time to register your bot with the Direct Line Speech channel. This cha
 > [!TIP]
 > If you'd like to learn more, see [Connect a bot to Direct Line Speech](https://docs.microsoft.com/azure/bot-service/bot-service-channel-connect-directlinespeech?view=azure-bot-service-4.0). This page includes additional information and known issues.

-## Build the Direct Line Speech Client
+## Build the Windows Voice Assistant Client

-In this step, you're going to build the Direct Line Speech Client. The client is a Windows Presentation Foundation (WPF) app in C# that uses the [Speech SDK](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk) to manage communication with your bot using the Direct Line Speech channel. Use it to interact with and test your bot before writing a custom client app.
+In this step, you're going to build the Windows Voice Assistant Client. The client is a Windows Presentation Foundation (WPF) app in C# that uses the [Speech SDK](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk) to manage communication with your bot over the Direct Line Speech channel. Use it to interact with and test your bot before writing a custom client app.

-The Direct Line Speech Client has a simple UI that allows you to configure the connection to your bot, view the text conversation, view Bot-Framework activities in JSON format, and display adaptive cards. It also supports the use of custom keywords. You'll use this client to speak with your bot and receive a voice response.
+The Windows Voice Assistant Client has a simple UI that lets you configure the connection to your bot, view the text conversation, view Bot Framework activities in JSON format, and display adaptive cards. It also supports the use of custom keywords. You'll use this client to speak with your bot and receive a voice response.

 Before we move on, make sure that your microphone and speakers are enabled and working.

-1. Navigate to the GitHub repository for the [Direct Line Speech Client](https://github.com/Azure-Samples/Cognitive-Services-Direct-Line-Speech-Client/blob/master/README.md).
+1. Navigate to the GitHub repository for the [Windows Voice Assistant Client](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/samples/clients/csharp-wpf/README.md).
 2. Follow the instructions provided to clone the repository, build the project, configure the client, and launch the client.
 3. Click **Reconnect** and make sure you see the message **Press the mic button, or type to start talking to your bot**.
 4. Let's test it out. Click the microphone button, and speak a few words in English. The recognized text will appear as you speak. When you're done speaking, the bot will reply in its own voice, saying "echo" followed by the recognized words. (The sketch after this list shows roughly what steps 3 and 4 do in Speech SDK terms.)
 5. You can also use text to communicate with the bot. Just type the text in the bottom bar.
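As a rough sketch (not the sample client's actual code), the **Reconnect** and mic-button flow in steps 3 and 4 maps to Speech SDK calls like the following. It assumes the SDK's `BotFrameworkConfig` entry point, and the key and region are placeholders for your own Speech resource:

```csharp
using System;
using Microsoft.CognitiveServices.Speech.Dialog;

// Placeholders: substitute your own Speech resource key and region.
var config = BotFrameworkConfig.FromSubscription("YourSpeechKey", "westus2");

// With no audio configuration given, the connector uses the default microphone.
var connector = new DialogServiceConnector(config);

// Print recognized text and note each bot reply, as the client window does.
connector.Recognized += (s, e) => Console.WriteLine($"Recognized: {e.Result.Text}");
connector.ActivityReceived += (s, e) => Console.WriteLine("Bot activity received");

await connector.ConnectAsync();    // roughly what Reconnect does
await connector.ListenOnceAsync(); // roughly what the mic button starts
```

Run it as a console app with the `Microsoft.CognitiveServices.Speech` NuGet package installed; once `ListenOnceAsync` starts, speak into the microphone and the recognized text prints as the events fire.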

-### Troubleshooting errors in Direct Line Speech Client
+### Troubleshooting errors in Windows Voice Assistant Client

 If you get an error message in your main app window, use this table to identify and troubleshoot the error:
@@ -301,7 +301,7 @@ If your issue isn't addressed in the table, see [Voice assistants: Frequently as

 ### View bot activities

-Every bot sends and receives **Activity** messages. In the **Activity Log** window of Direct Line Speech Client, you'll see timestamped logs with each activity that the client has received from the bot. You can also see the activities that the client sent to the bot using the [`DialogServiceConnector.SendActivityAsync`](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.sendactivityasync) method. When you select a log item, it will show the details of the associated activity as JSON.
+Every bot sends and receives **Activity** messages. In the **Activity Log** window of the Windows Voice Assistant Client, you'll see timestamped logs with each activity that the client has received from the bot. You can also see the activities that the client sent to the bot using the [`DialogServiceConnector.SendActivityAsync`](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.sendactivityasync) method. When you select a log item, it shows the details of the associated activity as JSON.
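Sending an activity from your own client code is a single method call on the connector. Here's a minimal sketch; the `connector` parameter is the one built in the earlier sketch, and the activity JSON is illustrative rather than taken from the sample client:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech.Dialog;

static class ActivitySender
{
    // Sends a hand-built message activity to the bot through the channel.
    public static async Task SendHelloAsync(DialogServiceConnector connector)
    {
        // Illustrative Bot Framework message activity.
        string activityJson = "{\"type\": \"message\", \"text\": \"Hello, bot\"}";

        // SendActivityAsync returns an identifier string for the sent activity.
        string id = await connector.SendActivityAsync(activityJson);
        Console.WriteLine($"Activity sent, id: {id}");
    }
}
```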

 Here's a sample JSON of an Activity the client received:
@@ -343,7 +343,7 @@ To learn more about what's returned in the JSON output, see [fields in the Activ

 ### View client source code for calls to the Speech SDK

-The Direct Line Speech Client uses the NuGet package [Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech/), which contains the Speech SDK. A good place to start reviewing the sample code is the method InitSpeechConnector() in file [`DLSpeechClient\MainWindow.xaml.cs`](https://github.com/Azure-Samples/Cognitive-Services-Direct-Line-Speech-Client/blob/master/DLSpeechClient/MainWindow.xaml.cs), which creates these two Speech SDK objects:
+The Windows Voice Assistant Client uses the NuGet package [Microsoft.CognitiveServices.Speech](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech/), which contains the Speech SDK. A good place to start reviewing the sample code is the `InitSpeechConnector()` method in [`VoiceAssistantClient\MainWindow.xaml.cs`](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/samples/clients/csharp-wpf/VoiceAssistantClient/MainWindow.xaml.cs), which creates these two Speech SDK objects:
 - [`DialogServiceConnector`](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.-ctor) - To manage the channel connection and client subscription events for handling recognized speech and bot responses.
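As a rough sketch of that two-object pattern (placeholder values again, not the sample's actual code), construction looks like this, with the connector explicitly bound to the default microphone:

```csharp
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Dialog;

// The configuration object: key and region are placeholders for your Speech resource.
var config = BotFrameworkConfig.FromSubscription("YourSpeechKey", "westus2");

// The connector, explicitly bound to the default microphone input.
var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
var connector = new DialogServiceConnector(config, audioConfig);
```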
@@ -356,11 +356,11 @@ The Speech SDK supports custom keyword activation. Similar to "Hey Cortana" for

 Keyword detection is done on the client app. If you're using a keyword, audio is streamed to the Direct Line Speech channel only when the keyword is detected. The Direct Line Speech channel includes a component called *keyword verification (KWV)*, which does more complex processing in the cloud to verify that the keyword you've chosen is at the start of the audio stream. If keyword verification succeeds, the channel communicates with the bot.

-Follow these steps to create a keyword model, configure the Direct Line Speech Client to use this model, and finally, test it with your bot.
+Follow these steps to create a keyword model, configure the Windows Voice Assistant Client to use this model, and finally, test it with your bot.

 1. Follow these instructions to [create a custom keyword by using the Speech service](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-devices-sdk-create-kws).
 2. Unzip the model file that you downloaded in the previous step. It should be named for your keyword. You're looking for a file named `kws.table`.
-3. In the Direct Line Speech client, locate the **Settings** menu (look for the gear icon in the top right). Locate **Model file path** and enter the full path name for the `kws.table` file from step 2.
+3. In the Windows Voice Assistant Client, locate the **Settings** menu (look for the gear icon in the top right). Locate **Model file path** and enter the full path of the `kws.table` file from step 2.
 4. Make sure to check the box labeled **Enabled**. You should see this message next to the check box: "Will listen for the keyword upon next connection". If you've provided the wrong file or an invalid path, you should see an error message.
 5. Enter your speech **subscription key** and **subscription key region**, and then click **OK** to close the **Settings** menu.
 6. Click **Reconnect**. You should see a message that reads: "New conversation started - type, press the microphone button, or say the keyword". The app is now continuously listening.
@@ -374,10 +374,10 @@ Follow these steps to create a keyword model, configure the Direct Line Speech C

 ### View the source code that enables keyword detection

-In the Direct Line Speech Client source code, take a look at these files to review the code that's used to enable keyword detection:
+In the Windows Voice Assistant Client source code, take a look at these files to review the code that's used to enable keyword detection (a sketch of the two calls follows the list):

-1.[`DLSpeechClient\Models.cs`](https://github.com/Azure-Samples/Cognitive-Services-Direct-Line-Speech-Client/blob/master/DLSpeechClient/Models.cs) includes a call to the Speech SDK method [`KeywordRecognitionModel.fromFile()`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/keywordrecognitionmodel?view=azure-node-latest#fromfile-string-), which is used to instantiate the model from a local file on disk.
-1.[`DLSpeechClient\MainWindow.xaml.cs`](https://github.com/Azure-Samples/Cognitive-Services-Direct-Line-Speech-Client/blob/master/DLSpeechClient/MainWindow.xaml.cs) includes a call to Speech SDK method [`DialogServiceConnector.StartKeywordRecognitionAsync()`](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.startkeywordrecognitionasync), which activates continuous keyword detection.
+1. [`VoiceAssistantClient\Models.cs`](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/samples/clients/csharp-wpf/VoiceAssistantClient/Models.cs) includes a call to the Speech SDK method [`KeywordRecognitionModel.fromFile()`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/keywordrecognitionmodel?view=azure-node-latest#fromfile-string-), which is used to instantiate the model from a local file on disk.
+1. [`VoiceAssistantClient\MainWindow.xaml.cs`](https://github.com/Azure-Samples/Cognitive-Services-Voice-Assistant/blob/master/samples/clients/csharp-wpf/VoiceAssistantClient/MainWindow.xaml.cs) includes a call to the Speech SDK method [`DialogServiceConnector.StartKeywordRecognitionAsync()`](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.dialog.dialogserviceconnector.startkeywordrecognitionasync), which activates continuous keyword detection.
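Those two calls amount to only a few lines. A hedged sketch, where the `connector` is the one from the earlier sketches and the model path is a placeholder for wherever you unzipped `kws.table`:

```csharp
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Dialog;

static class KeywordSetup
{
    // Loads the unzipped keyword model and starts continuous keyword recognition.
    public static async Task EnableKeywordAsync(DialogServiceConnector connector)
    {
        // Placeholder path; point this at your own kws.table file.
        var model = KeywordRecognitionModel.FromFile(@"C:\keywords\kws.table");
        await connector.StartKeywordRecognitionAsync(model);
    }
}
```

Note that the C# factory method is `KeywordRecognitionModel.FromFile` (the link above points at the JavaScript `fromFile`), and the connector also offers `StopKeywordRecognitionAsync()` to stop listening.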

 ## (Optional) Change the language and bot voice

@@ -387,7 +387,7 @@ The bot that you've created will listen for and respond in English, with a defau

 You can choose from any of the languages mentioned in the [speech-to-text](language-support.md#speech-to-text) table. In the example below, we'll change the language to German.

-1. Open the Direct Line Speech Client app, click on the settings button (upper-right gear icon), and enter `de-de` in the Language field (this is the Locale value mentioned in the [speech-to-text](language-support.md#speech-to-text) table). This sets the spoken language to be recognized, overriding the default `en-us`. This also instructs Direct Line Speech channel to use a default German voice for the Bot reply.
+1. Open the Windows Voice Assistant Client app, click the settings button (the upper-right gear icon), and enter `de-de` in the **Language** field (this is the Locale value from the [speech-to-text](language-support.md#speech-to-text) table). This sets the spoken language to be recognized, overriding the default `en-us`, and also instructs the Direct Line Speech channel to use a default German voice for the bot reply.
 2. Close the settings page, and click the **Reconnect** button to establish a new connection to your echo bot.
 3. Click the microphone button, and say a phrase in German. You'll see the recognized text, and the echo bot will reply with the default German voice. (A sketch of the equivalent SDK setting follows this list.)
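In client code, the same override is one property on the configuration object. A minimal sketch, again assuming `BotFrameworkConfig` with placeholder credentials:

```csharp
using Microsoft.CognitiveServices.Speech.Dialog;

// Placeholders for your Speech resource key and region.
var config = BotFrameworkConfig.FromSubscription("YourSpeechKey", "westus2");

// Overrides the default en-us recognition locale, as the client's Language field does.
config.Language = "de-de";

var connector = new DialogServiceConnector(config);
```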
@@ -420,8 +420,8 @@ Now that you've made the necessary change to the bot, the next step is to republ