# How to use custom entity pattern matching with the Speech SDK

The Cognitive Services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.

In this guide, you use the Speech SDK to develop a console application that derives intents from speech utterances spoken through your device's microphone. You'll learn how to:

> [!div class="checklist"]
>

## When should you use this?

Use this sample code if:

- You are only interested in matching very strictly what the user said. These patterns match more aggressively than LUIS.
- You do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents. This can be helpful because pattern matching is embedded within the SDK.
- You cannot or do not want to create a LUIS app, but you still want some voice-commanding capability.

## Prerequisites

Be sure you have the following items before you begin this guide:

```cpp
#include <iostream>
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Intent;

int main()
{
    std::cout << "Hello World!\n";

    auto config = SpeechConfig::FromSubscription("YOUR_SUBSCRIPTION_KEY", "YOUR_SUBSCRIPTION_REGION");
}
```

## Create a Speech configuration

Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses the key and Azure region for your Cognitive Services prediction resource.

* Replace `"YOUR_SUBSCRIPTION_KEY"` with your Cognitive Services prediction key.
* Replace `"YOUR_SUBSCRIPTION_REGION"` with your Cognitive Services resource region.

This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](/cpp/cognitive-services/speech/speechconfig).

## Initialize an IntentRecognizer

Now create an `IntentRecognizer`. Insert this code right below your Speech configuration.

```cpp
auto intentRecognizer = IntentRecognizer::FromConfig(config);
```

## Add some intents

You need to associate some patterns with a `PatternMatchingModel` and apply it to the `IntentRecognizer`.
We will start by creating a `PatternMatchingModel` and adding a few intents to it. A `PatternMatchingIntent` is a struct, so we will just use the in-line syntax.

> [!NOTE]
> We can add multiple patterns to an `Intent`.

```cpp
auto model = PatternMatchingModel::FromId("myNewModel");

model->Intents.push_back({"Take me to floor {floorName}.", "Go to floor {floorName}."}, "ChangeFloors");
model->Intents.push_back({"{action} the door."}, "OpenCloseDoor");
```

## Add some custom entities

To take full advantage of the pattern matcher, you can customize your entities. We will make "floorName" a list of the available floors.
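
For example, a list entity for "floorName" might be defined as in the following sketch. This assumes the SDK's in-line `PatternMatchingEntity` initialization (entity name, entity type, match mode, and the list of phrases); check the Speech SDK reference for the exact shape, and adjust the floor names to your scenario.

```cpp
// Sketch (assumed API shape): register "floorName" as a strict list entity so that
// "Take me to floor {floorName}." only matches one of these phrases.
model->Entities.push_back(
    { "floorName",
      Intent::EntityType::List,
      Intent::EntityMatchMode::Strict,
      { "ground floor", "lobby", "1st", "first", "one", "1", "2nd", "second", "two", "2" } });
```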

Now it is necessary to apply the model to the `IntentRecognizer`. It is possible to use multiple models at once, so the API takes a collection of models.
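
A minimal sketch of that step, assuming `IntentRecognizer::ApplyLanguageModels()` accepts a collection of shared model pointers:

```cpp
// Sketch (assumed API shape): wrap the pattern matching model in a collection
// and apply it to the recognizer.
std::vector<std::shared_ptr<LanguageUnderstandingModel>> collection;
collection.push_back(model);
intentRecognizer->ApplyLanguageModels(collection);
```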

From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method asks the Speech service to recognize speech in a single phrase and stop recognizing once the phrase is identified. For simplicity, we'll wait for the returned future to complete.

Insert this code below your intents:

```cpp
std::cout << "Say something ..." << std::endl;
auto result = intentRecognizer->RecognizeOnceAsync().get();
```

## Display the recognition results (or errors)

When the recognition result is returned by the Speech service, let's just print the result.

Insert this code below `auto result = intentRecognizer->RecognizeOnceAsync().get();`:

```cpp
switch (result->Reason)
{
case ResultReason::RecognizedSpeech:
    std::cout << "RECOGNIZED: Text = " << result->Text.c_str() << std::endl;
    std::cout << "NO INTENT RECOGNIZED!" << std::endl;
    break;
case ResultReason::RecognizedIntent:
{
    std::cout << "RECOGNIZED: Text = " << result->Text.c_str() << std::endl;
    std::cout << "Intent Id = " << result->IntentId.c_str() << std::endl;
    // Print the "floorName" entity if the matched pattern captured it.
    auto entities = result->GetEntities();
    auto floorName = entities.find("floorName");
    if (floorName != entities.end())
    {
        std::cout << "Floor name: = " << floorName->second.c_str() << std::endl;
    }
    break;
}
case ResultReason::Canceled:
{
    auto cancellation = CancellationDetails::FromResult(result);
    if (cancellation->Reason == CancellationReason::Error)
    {
        std::cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails.c_str() << std::endl;
        std::cout << "CANCELED: Did you update the subscription info?" << std::endl;
    }
    break;
}
default:
    break;
}
```

## Build and run your app

Now you're ready to build your app and test speech recognition using the Speech service.

1. **Compile the code** - From the menu bar of Visual Studio, choose **Build** > **Build Solution**.
2. **Start your app** - From the menu bar, choose **Debug** > **Start Debugging** or press <kbd>F5</kbd>.
3. **Start recognition** - It will prompt you to say something. The default language is English. Your speech is sent to the Speech service, transcribed as text, and rendered in the console.

For example, if you say "Take me to floor 2", this should be the output:

```
Say something ...
RECOGNIZED: Text = Take me to floor 2.
Intent Id = ChangeFloors
Floor name: = 2
```

Another example: if you say "Take me to floor 7", this should be the output: