
Commit 7290d77

Merge pull request #183232 from BrianMouncer/BrianMouncer-pr/1.20.0_updates ("Brian mouncer pr/1.20.0 updates")

2 parents: 6bfa585 + c9c37d5

File tree: 14 files changed (+1441, -629 lines)
Lines changed: 19 additions & 278 deletions
@@ -1,5 +1,5 @@
 ---
-title: How to use custom entity pattern matching with the C++ Speech SDK
+title: How to use custom entity pattern matching with the Speech SDK
 titleSuffix: Azure Cognitive Services
 description: In this guide, you learn how to recognize intents and custom entities from simple patterns.
 services: cognitive-services
@@ -10,15 +10,16 @@ ms.subservice: speech-service
 ms.topic: conceptual
 ms.date: 11/15/2021
 ms.author: chschrae
-ms.devlang: cpp
-ms.custom: devx-track-cpp
+ms.devlang: cpp, csharp
+zone_pivot_groups: programming-languages-set-nine
+ms.custom: devx-track-cpp, devx-track-csharp, mode-other
 ---
 
-# How to use custom entity pattern matching with the C++ Speech SDK
+# How to use custom entity pattern matching with the Speech SDK
 
 The Cognitive Services [Speech SDK](speech-sdk.md) has a built-in feature to provide **intent recognition** with **simple language pattern matching**. An intent is something the user wants to do: close a window, mark a checkbox, insert some text, etc.
 
-In this guide, you use the Speech SDK to develop a C++ console application that derives intents from speech utterances spoken through your device's microphone. You'll learn how to:
+In this guide, you use the Speech SDK to develop a console application that derives intents from speech utterances spoken through your device's microphone. You'll learn how to:
 
 > [!div class="checklist"]
 >
@@ -30,13 +31,13 @@ In this guide, you use the Speech SDK to develop a C++ console application that
 
 ## When should you use this?
 
-Use this sample code if:
-* You are only interested in matching very strictly what the user said. These patterns match more aggressively than LUIS.
-* You do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents. This can be helpful since it is embedded within the SDK.
-* You cannot or do not want to create a LUIS app but you still want some voice-commanding capability.
+Use this sample code if:
 
-If you do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents, this can be helpful since it is embedded within the SDK.
+- You are only interested in matching very strictly what the user said. These patterns match more aggressively than LUIS.
+- You do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents. This can be helpful since it is embedded within the SDK.
+- You cannot or do not want to create a LUIS app but you still want some voice-commanding capability.
 
+If you do not have access to a [LUIS](../LUIS/index.yml) app, but still want intents, this can be helpful since it is embedded within the SDK.
 
 ## Prerequisites
 
@@ -49,272 +50,12 @@ Be sure you have the following items before you begin this guide:
 
 [!INCLUDE [Pattern Matching Overview](includes/pattern-matching-overview.md)]
 
-## Create a speech project in Visual Studio
-
-[!INCLUDE [Create project](../../../includes/cognitive-services-speech-service-quickstart-cpp-create-proj.md)]
-
-## Open your project in Visual Studio
-
-Next, open your project in Visual Studio.
-
-1. Launch Visual Studio 2019.
-2. Load your project and open `helloworld.cpp`.
-
-## Start with some boilerplate code
-
-Let's add some code that works as a skeleton for our project.
-
-```cpp
-#include <iostream>
-#include <speechapi_cxx.h>
-
-using namespace Microsoft::CognitiveServices::Speech;
-using namespace Microsoft::CognitiveServices::Speech::Intent;
-
-int main()
-{
-    std::cout << "Hello World!\n";
-
-    auto config = SpeechConfig::FromSubscription("YOUR_SUBSCRIPTION_KEY", "YOUR_SUBSCRIPTION_REGION");
-}
-```
-
-## Create a Speech configuration
-
-Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses the key and Azure region for your Cognitive Services prediction resource.
-
-* Replace `"YOUR_SUBSCRIPTION_KEY"` with your Cognitive Services prediction key.
-* Replace `"YOUR_SUBSCRIPTION_REGION"` with your Cognitive Services resource region.
-
-This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](/cpp/cognitive-services/speech/speechconfig).
-
-## Initialize an IntentRecognizer
-
-Now create an `IntentRecognizer`. Insert this code right below your Speech configuration.
-
-```cpp
-auto intentRecognizer = IntentRecognizer::FromConfig(config);
-```
-
-## Add some intents
-
-You need to associate some patterns with a `PatternMatchingModel` and apply it to the `IntentRecognizer`.
-We will start by creating a `PatternMatchingModel` and adding a few intents to it. A `PatternMatchingIntent` is a struct, so we will just use the in-line syntax.
-
-> [!NOTE]
-> We can add multiple patterns to an `Intent`.
-
-```cpp
-auto model = PatternMatchingModel::FromId("myNewModel");
-
-model->Intents.push_back({"Take me to floor {floorName}.", "Go to floor {floorName}."}, "ChangeFloors");
-model->Intents.push_back({"{action} the door."}, "OpenCloseDoor");
-```
-
-## Add some custom entities
-
-To take full advantage of the pattern matcher, you can customize your entities. We will make "floorName" a list of the available floors.
-
-```cpp
-model->Entities.push_back({ "floorName", Intent::EntityType::List, Intent::EntityMatchMode::Strict, {"one", "two", "lobby", "ground floor"} });
-```
-
-## Apply our model to the recognizer
-
-Now it is necessary to apply the model to the `IntentRecognizer`. It is possible to use multiple models at once, so the API takes a collection of models.
-
-```cpp
-std::vector<std::shared_ptr<LanguageUnderstandingModel>> collection;
-
-collection.push_back(model);
-intentRecognizer->ApplyLanguageModels(collection);
-```
-
-## Recognize an intent
-
-From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method asks the Speech service to recognize speech in a single phrase, and to stop recognizing speech once the phrase is identified. For simplicity, we'll wait on the returned future to complete.
-
-Insert this code below your intents:
-
-```cpp
-std::cout << "Say something ..." << std::endl;
-auto result = intentRecognizer->RecognizeOnceAsync().get();
-```
-
-## Display the recognition results (or errors)
-
-When the recognition result is returned by the Speech service, let's just print the result.
-
-Insert this code below `auto result = intentRecognizer->RecognizeOnceAsync().get();`:
-
-```cpp
-switch (result->Reason)
-{
-case ResultReason::RecognizedSpeech:
-    std::cout << "RECOGNIZED: Text = " << result->Text.c_str() << std::endl;
-    std::cout << "NO INTENT RECOGNIZED!" << std::endl;
-    break;
-case ResultReason::RecognizedIntent:
-{
-    std::cout << "RECOGNIZED: Text = " << result->Text.c_str() << std::endl;
-    std::cout << "  Intent Id = " << result->IntentId.c_str() << std::endl;
-    auto entities = result->GetEntities();
-    if (entities.find("floorName") != entities.end())
-    {
-        std::cout << "  Floor name: = " << entities["floorName"].c_str() << std::endl;
-    }
-
-    if (entities.find("action") != entities.end())
-    {
-        std::cout << "  Action: = " << entities["action"].c_str() << std::endl;
-    }
-
-    break;
-}
-case ResultReason::NoMatch:
-{
-    auto noMatch = NoMatchDetails::FromResult(result);
-    switch (noMatch->Reason)
-    {
-    case NoMatchReason::NotRecognized:
-        std::cout << "NOMATCH: Speech was detected, but not recognized." << std::endl;
-        break;
-    case NoMatchReason::InitialSilenceTimeout:
-        std::cout << "NOMATCH: The start of the audio stream contains only silence, and the service timed out waiting for speech." << std::endl;
-        break;
-    case NoMatchReason::InitialBabbleTimeout:
-        std::cout << "NOMATCH: The start of the audio stream contains only noise, and the service timed out waiting for speech." << std::endl;
-        break;
-    case NoMatchReason::KeywordNotRecognized:
-        std::cout << "NOMATCH: Keyword not recognized." << std::endl;
-        break;
-    }
-    break;
-}
-case ResultReason::Canceled:
-{
-    auto cancellation = CancellationDetails::FromResult(result);
-
-    if (!cancellation->ErrorDetails.empty())
-    {
-        std::cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails.c_str() << std::endl;
-        std::cout << "CANCELED: Did you update the subscription info?" << std::endl;
-    }
-    break;
-}
-default:
-    break;
-}
-```
-
-## Check your code
-
-At this point, your code should look like this:
-
-```cpp
-#include <iostream>
-#include <speechapi_cxx.h>
-
-using namespace Microsoft::CognitiveServices::Speech;
-using namespace Microsoft::CognitiveServices::Speech::Intent;
-
-int main()
-{
-    auto config = SpeechConfig::FromSubscription("YOUR_SUBSCRIPTION_KEY", "YOUR_SUBSCRIPTION_REGION");
-    auto intentRecognizer = IntentRecognizer::FromConfig(config);
-
-    auto model = PatternMatchingModel::FromId("myNewModel");
-
-    model->Intents.push_back({"Take me to floor {floorName}.", "Go to floor {floorName}."}, "ChangeFloors");
-    model->Intents.push_back({"{action} the door."}, "OpenCloseDoor");
-
-    model->Entities.push_back({ "floorName", Intent::EntityType::List, Intent::EntityMatchMode::Strict, {"one", "two", "lobby", "ground floor"} });
-
-    std::vector<std::shared_ptr<LanguageUnderstandingModel>> collection;
-
-    collection.push_back(model);
-    intentRecognizer->ApplyLanguageModels(collection);
-
-    std::cout << "Say something ..." << std::endl;
-
-    auto result = intentRecognizer->RecognizeOnceAsync().get();
-
-    switch (result->Reason)
-    {
-    case ResultReason::RecognizedSpeech:
-        std::cout << "RECOGNIZED: Text = " << result->Text.c_str() << std::endl;
-        std::cout << "NO INTENT RECOGNIZED!" << std::endl;
-        break;
-    case ResultReason::RecognizedIntent:
-    {
-        std::cout << "RECOGNIZED: Text = " << result->Text.c_str() << std::endl;
-        std::cout << "  Intent Id = " << result->IntentId.c_str() << std::endl;
-        auto entities = result->GetEntities();
-        if (entities.find("floorName") != entities.end())
-        {
-            std::cout << "  Floor name: = " << entities["floorName"].c_str() << std::endl;
-        }
-
-        if (entities.find("action") != entities.end())
-        {
-            std::cout << "  Action: = " << entities["action"].c_str() << std::endl;
-        }
-
-        break;
-    }
-    case ResultReason::NoMatch:
-    {
-        auto noMatch = NoMatchDetails::FromResult(result);
-        switch (noMatch->Reason)
-        {
-        case NoMatchReason::NotRecognized:
-            std::cout << "NOMATCH: Speech was detected, but not recognized." << std::endl;
-            break;
-        case NoMatchReason::InitialSilenceTimeout:
-            std::cout << "NOMATCH: The start of the audio stream contains only silence, and the service timed out waiting for speech." << std::endl;
-            break;
-        case NoMatchReason::InitialBabbleTimeout:
-            std::cout << "NOMATCH: The start of the audio stream contains only noise, and the service timed out waiting for speech." << std::endl;
-            break;
-        case NoMatchReason::KeywordNotRecognized:
-            std::cout << "NOMATCH: Keyword not recognized." << std::endl;
-            break;
-        }
-        break;
-    }
-    case ResultReason::Canceled:
-    {
-        auto cancellation = CancellationDetails::FromResult(result);
-
-        if (!cancellation->ErrorDetails.empty())
-        {
-            std::cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails.c_str() << std::endl;
-            std::cout << "CANCELED: Did you update the subscription info?" << std::endl;
-        }
-        break;
-    }
-    default:
-        break;
-    }
-}
-```
-
-## Build and run your app
-
-Now you're ready to build your app and test our speech recognition using the Speech service.
-
-1. **Compile the code** - From the menu bar of Visual Studio, choose **Build** > **Build Solution**.
-2. **Start your app** - From the menu bar, choose **Debug** > **Start Debugging**, or press <kbd>F5</kbd>.
-3. **Start recognition** - It will prompt you to say something. The default language is English. Your speech is sent to the Speech service, transcribed as text, and rendered in the console.
-
-For example, if you say "Take me to floor 2", this should be the output:
-
-```
-Say something ...
-RECOGNIZED: Text = Take me to floor 2.
-  Intent Id = ChangeFloors
-  Floor name: = 2
-```
-
-As another example, if you say "Take me to floor 7", this should be the output:
-
-```
-Say something ...
-RECOGNIZED: Text = Take me to floor 7.
-NO INTENT RECOGNIZED!
-```
+::: zone pivot="programming-language-csharp"
+[!INCLUDE [Header](includes/quickstarts/intent-recognition/csharp/header.md)]
+[!INCLUDE [csharp](includes/quickstarts/intent-recognition/csharp/pattern-matching.md)]
+::: zone-end
 
-The Intent ID is empty because 7 was not in our list.
+::: zone pivot="programming-language-cpp"
+[!INCLUDE [Header](includes/quickstarts/intent-recognition/cpp/header.md)]
+[!INCLUDE [cpp](includes/quickstarts/intent-recognition/cpp/pattern-matching.md)]
+::: zone-end
