Commit b9ad227

Merge pull request #99791 from erhopf/luis-ui-speech
[CogSvcs] Intent Recognition updates
2 parents f1f5212 + 1a694ea commit b9ad227

File tree

10 files changed

+207
-95
lines changed


articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/cpp/windows.md

Lines changed: 41 additions & 21 deletions
@@ -1,82 +1,102 @@
 ---
 title: "Quickstart: Recognize speech, intents, and entities, C++ - Speech service"
 titleSuffix: Azure Cognitive Services
-description: TBD
 services: cognitive-services
 author: erhopf
 manager: nitinme
 ms.service: cognitive-services
 ms.subservice: speech-service
+ms.date: 01/02/2020
 ms.topic: include
-ms.date: 10/28/2019
 ms.author: erhopf
 zone_pivot_groups: programming-languages-set-two
 ---
 
 ## Prerequisites
 
-Before you get started, make sure to:
+Before you get started:
 
-> [!div class="checklist"]
->
-> * [Create an Azure Speech Resource](../../../../get-started.md)
-> * [Create a Language Understanding (LUIS) application and get an endpoint key](../../../../quickstarts/create-luis.md)
-> * [Setup your development environment](../../../../quickstarts/setup-platform.md?tabs=windows)
-> * [Create an empty sample project](../../../../quickstarts/create-project.md?tabs=windows)
+* If this is your first C++ project, use this guide to <a href="../quickstarts/create-project.md?tabs=windows" target="_blank">create an empty sample project</a>.
+* <a href="../quickstarts/setup-platform.md?tabs=windows" target="_blank">Install the Speech SDK for your development environment</a>.
+
+## Create a LUIS app for intent recognition
+
+[!INCLUDE [Create a LUIS app for intent recognition](../luis-sign-up.md)]
 
 ## Open your project in Visual Studio
 
-The first step is to make sure that you have your project open in Visual Studio.
+Next, open your project in Visual Studio.
 
 1. Launch Visual Studio 2019.
 2. Load your project and open `helloworld.cpp`.
 
 ## Start with some boilerplate code
 
 Let's add some code that works as a skeleton for our project. Make note that you've created an async method called `recognizeIntent()`.
+
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=6-16,73-81)]
 
 ## Create a Speech configuration
 
-Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code in the `recognizeIntent()` method.
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses the key and location for your LUIS prediction resource.
 
-This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig).
-The Speech SDK will default to recognizing using en-us for the language, see [Specify source language for speech to text](../../../../how-to-specify-source-language.md) for information on choosing the source language.
+> [!IMPORTANT]
+> Your starter key and authoring keys will not work. You must use your prediction key and location that you created earlier. For more information, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).
 
-> [!NOTE]
-> It is important to use the LUIS Endpoint key and not the Starter or Authoring keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
+Insert this code in the `recognizeIntent()` method. Make sure you update these values:
+
+* Replace `"YourLanguageUnderstandingSubscriptionKey"` with your LUIS prediction key.
+* Replace `"YourLanguageUnderstandingServiceRegion"` with your LUIS location.
+
+> [!TIP]
+> If you need help finding these values, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).
 
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=25)]
 
+This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig).
+
+The Speech SDK will default to recognizing using en-us for the language, see [Specify source language for speech to text](../../../../how-to-specify-source-language.md) for information on choosing the source language.
+
 ## Initialize an IntentRecognizer
 
 Now, let's create an `IntentRecognizer`. Insert this code in the `recognizeIntent()` method, right below your Speech configuration.
+
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=28)]
 
 ## Add a LanguageUnderstandingModel and Intents
 
-You now need to associate a `LanguageUnderstandingModel` with the intent recognizer and add the intents you want recognized.
+You need to associate a `LanguageUnderstandingModel` with the intent recognizer, and add the intents you want recognized. We're going to use intents from the prebuilt domain for home automation.
+
+Insert this code below your `IntentRecognizer`. Make sure that you replace `"YourLanguageUnderstandingAppId"` with your LUIS app ID.
+
+> [!TIP]
+> If you need help finding this value, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).
+
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=31-34)]
 
 ## Recognize an intent
 
-From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
-For simplicity we'll wait on the future returned to complete.
+From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech. For simplicity we'll wait on the future returned to complete.
+
+Insert this code below your model:
 
-Inside the using statement, add this code:
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=44)]
 
 ## Display the recognition results (or errors)
 
 When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the result to console.
 
-Inside the using statement, below `RecognizeOnceAsync()`, add this code:
+Insert this code below `auto result = recognizer->RecognizeOnceAsync().get();`:
+
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=47-72)]
 
 ## Check your code
 
 At this point, your code should look like this:
-(We've added some comments to this version)
+
+> [!NOTE]
+> We've added some comments to this version.
+
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=6-81)]
 
 ## Build and run your app

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet.md

Lines changed: 39 additions & 22 deletions
@@ -1,32 +1,31 @@
 ---
 title: "Quickstart: Recognize speech, intents, and entities, C# - Speech service"
 titleSuffix: Azure Cognitive Services
-description: TBD
 services: cognitive-services
 author: erhopf
 manager: nitinme
 ms.service: cognitive-services
 ms.subservice: speech-service
+ms.date: 01/02/2020
 ms.topic: include
-ms.date: 10/28/2019
 ms.author: erhopf
 zone_pivot_groups: programming-languages-set-two
 ---
 
 ## Prerequisites
 
-Before you get started, make sure to:
+Before you get started:
 
-> [!div class="checklist"]
->
-> * [Create an Azure Speech Resource](../../../../get-started.md)
-> * [Create a Language Understanding (LUIS) application and get an endpoint key](../../../../quickstarts/create-luis.md)
-> * [Setup your development environment](../../../../quickstarts/setup-platform.md?tabs=dotnet)
-> * [Create an empty sample project](../../../../quickstarts/create-project.md?tabs=dotnet)
+* If this is your first C# project, use this guide to <a href="../quickstarts/create-project.md?tabs=dotnet" target="_blank">create an empty sample project</a>.
+* <a href="../quickstarts/setup-platform.md?tabs=dotnet" target="_blank">Install the Speech SDK for your development environment</a>.
+
+## Create a LUIS app for intent recognition
+
+[!INCLUDE [Create a LUIS app for intent recognition](../luis-sign-up.md)]
 
 ## Open your project in Visual Studio
 
-The first step is to make sure that you have your project open in Visual Studio.
+Next, open your project in Visual Studio.
 
 1. Launch Visual Studio 2019.
 2. Load your project and open `Program.cs`.
@@ -38,44 +37,62 @@ Let's add some code that works as a skeleton for our project. Make note that you
 
 ## Create a Speech configuration
 
-Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code in the `RecognizeIntentAsync()` method.
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses the key and location for your LUIS prediction resource.
 
-This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig?view=azure-dotnet).
-The Speech SDK will default to recognizing using en-us for the language, see [Specify source language for speech to text](../../../../how-to-specify-source-language.md) for information on choosing the source language.
+> [!IMPORTANT]
+> Your starter key and authoring keys will not work. You must use your prediction key and location that you created earlier. For more information, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).
 
-> [!NOTE]
-> It is important to use the LUIS Endpoint key and not the Starter or Authoring keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
+Insert this code in the `RecognizeIntentAsync()` method. Make sure you update these values:
+
+* Replace `"YourLanguageUnderstandingSubscriptionKey"` with your LUIS prediction key.
+* Replace `"YourLanguageUnderstandingServiceRegion"` with your LUIS location.
+
+> [!TIP]
+> If you need help finding these values, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).
 
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=26)]
 
+This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig?view=azure-dotnet).
+
+The Speech SDK will default to recognizing using en-us for the language, see [Specify source language for speech to text](../../../../how-to-specify-source-language.md) for information on choosing the source language.
+
 ## Initialize an IntentRecognizer
 
-Now, let's create an `IntentRecognizer`. This object is created inside of a using statement to ensure the proper release of unmanaged resources. Insert this code in the `RecognizeIntentAsync()` method, right below your Speech configuration.
+Now, let's create an `IntentRecognizer`. This object is created inside of a using statement to ensure the proper release of unmanaged resources. Insert this code in the `RecognizeIntentAsync()` method, right below your Speech configuration.
+
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=28-30,76)]
 
-## Add a LanguageUnderstandingModel and Intents
+## Add a LanguageUnderstandingModel and intents
+
+You need to associate a `LanguageUnderstandingModel` with the intent recognizer, and add the intents that you want recognized. We're going to use intents from the prebuilt domain for home automation. Insert this code in the using statement from the previous section. Make sure that you replace `"YourLanguageUnderstandingAppId"` with your LUIS app ID.
+
+> [!TIP]
+> If you need help finding this value, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).
 
-You now need to associate a `LanguageUnderstandingModel` with the intent recognizer and add the intents you want recognized.
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=31-35)]
 
 ## Recognize an intent
 
 From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
 
-Inside the using statement, add this code:
+Inside the using statement, add this code below your model:
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=46)]
 
-## Display the recognition results (or errors)
+## Display recognition results (or errors)
 
-When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the result to console.
+When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the results to console.
 
 Inside the using statement, below `RecognizeOnceAsync()`, add this code:
+
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=48-75)]
 
 ## Check your code
 
 At this point, your code should look like this:
-(We've added some comments to this version)
+
+> [!NOTE]
+> We've added some comments to this version.
+
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=5-86)]
 
 ## Build and run your app

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/header.md

Lines changed: 5 additions & 2 deletions
@@ -8,12 +8,15 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: speech-service
 ms.topic: include
-ms.date: 10/28/2019
+ms.date: 1/02/2019
 ms.author: erhopf
 zone_pivot_groups: programming-languages-set-two
 ---
 
-In this quickstart you will use the [Speech SDK](~/articles/cognitive-services/speech-service/speech-sdk.md) to interactively recognize speech intents from audio data captured using a microphone. After satisfying a few prerequisites, recognizing speech from a microphone only takes four steps:
+In this quickstart, you'll use the [Speech SDK](~/articles/cognitive-services/speech-service/speech-sdk.md) and the Language Understanding (LUIS) service to recognize intents from audio data captured from a microphone. Specifically, you'll use the Speech SDK to capture speech, and a prebuilt domain from LUIS to identify intents for home automation, like turning on and off a light.
+
+After satisfying a few prerequisites, recognizing speech and identifying intents from a microphone only takes a few steps:
+
 > [!div class="checklist"]
 >
 > * Create a ````SpeechConfig```` object from your subscription key and region.
