`articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/cpp/windows.md` (41 additions, 21 deletions)
---
title: "Quickstart: Recognize speech, intents, and entities, C++ - Speech service"
titleSuffix: Azure Cognitive Services
services: cognitive-services
author: erhopf
manager: nitinme
ms.service: cognitive-services
ms.subservice: speech-service
ms.date: 01/02/2020
ms.topic: include
ms.author: erhopf
zone_pivot_groups: programming-languages-set-two
---
## Prerequisites

Before you get started:

* If this is your first C++ project, use this guide to <a href="../quickstarts/create-project.md?tabs=windows" target="_blank">create an empty sample project</a>.
* <a href="../quickstarts/setup-platform.md?tabs=windows" target="_blank">Install the Speech SDK for your development environment</a>.

## Create a LUIS app for intent recognition

[!INCLUDE [Create a LUIS app for intent recognition](../luis-sign-up.md)]
## Open your project in Visual Studio

Next, open your project in Visual Studio.

1. Launch Visual Studio 2019.
2. Load your project and open `helloworld.cpp`.

## Start with some boilerplate code

Let's add some code that works as a skeleton for our project. Make note that you've created an async method called `recognizeIntent()`.
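The skeleton itself wasn't captured in this excerpt. A minimal sketch, assuming the Microsoft.CognitiveServices.Speech NuGet package is installed (the namespace and program names are illustrative):

```cpp
#include <iostream>
// Speech SDK header, available after installing the Speech SDK NuGet package.
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Intent;

// Skeleton: the next sections fill in this method step by step.
void recognizeIntent()
{
}

int main()
{
    recognizeIntent();
    std::cout << "Please press Enter to continue.\n";
    std::cin.get();
    return 0;
}
```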
## Create a Speech configuration

Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses the key and location for your LUIS prediction resource.

> [!IMPORTANT]
> Your starter key and authoring keys will not work. You must use the prediction key and location that you created earlier. For more information, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).

Insert this code in the `recognizeIntent()` method. Make sure you update these values:

* Replace `"YourLanguageUnderstandingSubscriptionKey"` with your LUIS prediction key.
* Replace `"YourLanguageUnderstandingServiceRegion"` with your LUIS location.

> [!TIP]
> If you need help finding these values, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).

This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig).

The Speech SDK defaults to recognizing speech using en-US as the language. See [Specify source language for speech to text](../../../../how-to-specify-source-language.md) for information on choosing the source language.
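The configuration snippet was stripped from this excerpt; a sketch of the step described above, using the two placeholder values called out in the list, might look like this:

```cpp
// Placeholder values: replace with your LUIS prediction key and location.
auto config = SpeechConfig::FromSubscription(
    "YourLanguageUnderstandingSubscriptionKey",
    "YourLanguageUnderstandingServiceRegion");
```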
## Initialize an IntentRecognizer

Now, let's create an `IntentRecognizer`. Insert this code in the `recognizeIntent()` method, right below your Speech configuration.
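A sketch of that code, assuming `config` is the `SpeechConfig` from the previous step:

```cpp
// Create an intent recognizer from the Speech configuration.
// By default, this uses the default microphone for audio input.
auto recognizer = IntentRecognizer::FromConfig(config);
```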
## Add a LanguageUnderstandingModel and intents

You need to associate a `LanguageUnderstandingModel` with the intent recognizer, and add the intents you want recognized. We're going to use intents from the prebuilt domain for home automation.

Insert this code below your `IntentRecognizer`. Make sure that you replace `"YourLanguageUnderstandingAppId"` with your LUIS app ID.

> [!TIP]
> If you need help finding this value, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).
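A sketch of this step; the `HomeAutomation.*` intent names assume the prebuilt home automation domain mentioned above, and the exact names come from your LUIS app:

```cpp
// Placeholder value: replace with your LUIS app ID.
auto model = LanguageUnderstandingModel::FromAppId("YourLanguageUnderstandingAppId");

// Add intents from the prebuilt home automation domain.
recognizer->AddIntent(model, "HomeAutomation.TurnOn", "HomeAutomation.TurnOn");
recognizer->AddIntent(model, "HomeAutomation.TurnOff", "HomeAutomation.TurnOff");
```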
## Recognize an intent

From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified, it should stop recognizing speech. For simplicity, we'll wait on the returned future to complete.
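That call, blocking on the returned `std::future`, is the line the next section builds on:

```cpp
// Recognize a single utterance and wait for the result.
auto result = recognizer->RecognizeOnceAsync().get();
```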
## Display the recognition results

When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the result to the console.

Insert this code below `auto result = recognizer->RecognizeOnceAsync().get();`:
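A sketch of the result handling; the `ResultReason` values are from the Speech SDK, and the exact output strings are illustrative:

```cpp
if (result->Reason == ResultReason::RecognizedIntent)
{
    std::cout << "RECOGNIZED: Text=" << result->Text << "\n";
    std::cout << "  Intent Id: " << result->IntentId << "\n";
    std::cout << "  Intent Service JSON: "
              << result->Properties.GetProperty(
                     PropertyId::LanguageUnderstandingServiceResponse_JsonResult)
              << "\n";
}
else if (result->Reason == ResultReason::RecognizedSpeech)
{
    // Speech was recognized, but no intent matched.
    std::cout << "RECOGNIZED: Text=" << result->Text
              << " (intent not recognized)\n";
}
else if (result->Reason == ResultReason::NoMatch)
{
    std::cout << "NOMATCH: Speech could not be recognized.\n";
}
else if (result->Reason == ResultReason::Canceled)
{
    auto cancellation = CancellationDetails::FromResult(result);
    std::cout << "CANCELED: Reason=" << (int)cancellation->Reason << "\n";
}
```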
## Prerequisites

Before you get started:

* If this is your first C# project, use this guide to <a href="../quickstarts/create-project.md?tabs=dotnet" target="_blank">create an empty sample project</a>.
* <a href="../quickstarts/setup-platform.md?tabs=dotnet" target="_blank">Install the Speech SDK for your development environment</a>.

## Create a LUIS app for intent recognition

[!INCLUDE [Create a LUIS app for intent recognition](../luis-sign-up.md)]
## Open your project in Visual Studio

Next, open your project in Visual Studio.

1. Launch Visual Studio 2019.
2. Load your project and open `Program.cs`.

## Start with some boilerplate code

Let's add some code that works as a skeleton for our project. Make note that you've created an async method called `RecognizeIntentAsync()`.
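The skeleton itself wasn't captured in this excerpt. A minimal sketch, assuming the Microsoft.CognitiveServices.Speech NuGet package is installed (the namespace name is illustrative):

```csharp
using System;
using System.Threading.Tasks;
// Available after installing the Speech SDK NuGet package.
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Intent;

namespace HelloWorld
{
    class Program
    {
        // Skeleton: the next sections fill in this method step by step.
        public static async Task RecognizeIntentAsync()
        {
        }

        static void Main()
        {
            RecognizeIntentAsync().Wait();
            Console.WriteLine("Please press Enter to continue.");
            Console.ReadLine();
        }
    }
}
```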
## Create a Speech configuration

Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses the key and location for your LUIS prediction resource.

> [!IMPORTANT]
> Your starter key and authoring keys will not work. You must use the prediction key and location that you created earlier. For more information, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).

Insert this code in the `RecognizeIntentAsync()` method. Make sure you update these values:

* Replace `"YourLanguageUnderstandingSubscriptionKey"` with your LUIS prediction key.
* Replace `"YourLanguageUnderstandingServiceRegion"` with your LUIS location.

> [!TIP]
> If you need help finding these values, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).

This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig?view=azure-dotnet).

The Speech SDK defaults to recognizing speech using en-US as the language. See [Specify source language for speech to text](../../../../how-to-specify-source-language.md) for information on choosing the source language.
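The configuration snippet was stripped from this excerpt; a sketch of the step described above, using the two placeholder values called out in the list, might look like this:

```csharp
// Placeholder values: replace with your LUIS prediction key and location.
var config = SpeechConfig.FromSubscription(
    "YourLanguageUnderstandingSubscriptionKey",
    "YourLanguageUnderstandingServiceRegion");
```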
## Initialize an IntentRecognizer

Now, let's create an `IntentRecognizer`. This object is created inside of a using statement to ensure the proper release of unmanaged resources. Insert this code in the `RecognizeIntentAsync()` method, right below your Speech configuration.

## Add a LanguageUnderstandingModel and intents

You need to associate a `LanguageUnderstandingModel` with the intent recognizer, and add the intents that you want recognized. We're going to use intents from the prebuilt domain for home automation. Insert this code in the using statement from the previous section. Make sure that you replace `"YourLanguageUnderstandingAppId"` with your LUIS app ID.

> [!TIP]
> If you need help finding this value, see [Create a LUIS app for intent recognition](#create-a-luis-app-for-intent-recognition).
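A sketch of the two steps above together; the `HomeAutomation.*` intent names assume the prebuilt home automation domain, and the exact names come from your LUIS app:

```csharp
using (var recognizer = new IntentRecognizer(config))
{
    // Placeholder value: replace with your LUIS app ID.
    var model = LanguageUnderstandingModel.FromAppId("YourLanguageUnderstandingAppId");

    // Add intents from the prebuilt home automation domain.
    recognizer.AddIntent(model, "HomeAutomation.TurnOn", "HomeAutomation.TurnOn");
    recognizer.AddIntent(model, "HomeAutomation.TurnOff", "HomeAutomation.TurnOff");
}
```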
## Recognize an intent

From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified, it should stop recognizing speech.

Inside the using statement, add this code below your model:
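A sketch of that call:

```csharp
// Recognize a single utterance and await the result.
var result = await recognizer.RecognizeOnceAsync();
```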
## Display the recognition results

When the recognition result is returned by the Speech service, you'll want to do something with it. We're going to keep it simple and print the results to the console.

Inside the using statement, below `RecognizeOnceAsync()`, add this code:
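A sketch of the result handling; the `ResultReason` values are from the Speech SDK, and the exact output strings are illustrative:

```csharp
switch (result.Reason)
{
    case ResultReason.RecognizedIntent:
        Console.WriteLine($"RECOGNIZED: Text={result.Text}");
        Console.WriteLine($"    Intent Id: {result.IntentId}.");
        Console.WriteLine($"    Language Understanding JSON: {result.Properties.GetProperty(PropertyId.LanguageUnderstandingServiceResponse_JsonResult)}.");
        break;
    case ResultReason.RecognizedSpeech:
        // Speech was recognized, but no intent matched.
        Console.WriteLine($"RECOGNIZED: Text={result.Text}");
        Console.WriteLine("    Intent not recognized.");
        break;
    case ResultReason.NoMatch:
        Console.WriteLine("NOMATCH: Speech could not be recognized.");
        break;
    case ResultReason.Canceled:
        var cancellation = CancellationDetails.FromResult(result);
        Console.WriteLine($"CANCELED: Reason={cancellation.Reason}");
        break;
}
```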
`articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/header.md` (5 additions, 2 deletions)

manager: nitinme
ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: include
ms.date: 01/02/2020
ms.author: erhopf
zone_pivot_groups: programming-languages-set-two
---
In this quickstart, you'll use the [Speech SDK](~/articles/cognitive-services/speech-service/speech-sdk.md) and the Language Understanding (LUIS) service to recognize intents from audio data captured from a microphone. Specifically, you'll use the Speech SDK to capture speech, and a prebuilt domain from LUIS to identify intents for home automation, like turning a light on and off.

After satisfying a few prerequisites, recognizing speech and identifying intents from a microphone only takes a few steps:

> [!div class="checklist"]
>
> * Create a ````SpeechConfig```` object from your subscription key and region.