
Commit 36e371d

Merge pull request #42864 from v-kydela/v-kydela/speech-intent-recognition
Fix typos in Cognitive Speech SDK intent recognition quickstarts
2 parents 9a4ca2a + 1d6f071 commit 36e371d

File tree

5 files changed: +31 −26 lines changed


articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/cpp/windows.md

Lines changed: 8 additions & 7 deletions

```diff
@@ -18,6 +18,7 @@ zone_pivot_groups: programming-languages-set-two
 Before you get started, make sure to:
 
 > [!div class="checklist"]
+>
 > * [Create an Azure Speech Resource](../../../../get-started.md)
 > * [Create a LUIS application and get an endpoint key](../../../../quickstarts/create-luis.md)
 > * [Setup your development environment](../../../../quickstarts/setup-platform.md?tabs=windows)
@@ -37,16 +38,16 @@ Let's add some code that works as a skeleton for our project. Make note that you
 
 ## Create a Speech configuration
 
-Before you can initialize a `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoing key and region. Insert this code in the `recognizeIntent()` method.
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code in the `recognizeIntent()` method.
 
 This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig).
 
 > [!NOTE]
-> It is important to use the LUIS Endpoint key and not the Starter or Authroing keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
+> It is important to use the LUIS Endpoint key and not the Starter or Authoring keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
 
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=25)]
 
-## Initialize a IntentRecognizer
+## Initialize an IntentRecognizer
 
 Now, let's create an `IntentRecognizer`. Insert this code in the `recognizeIntent()` method, right below your Speech configuration.
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=28)]
@@ -58,8 +59,8 @@ You now need to associate a `LanguageUnderstandingModel` with the intent recogni
 
 ## Recognize an intent
 
-From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop reconizing speech.
-For similicity we'll wait on the future returned to complete.
+From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
+For simplicity we'll wait on the future returned to complete.
 
 Inside the using statement, add this code:
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=44)]
@@ -73,15 +74,15 @@ Inside the using statement, below `RecognizeOnceAsync()`, add this code:
 
 ## Check your code
 
-At this point, your code should look like this:
+At this point, your code should look like this:
 (We've added some comments to this version)
 [!code-cpp[](~/samples-cognitive-services-speech-sdk/quickstart/cpp/windows/intent-recognition/helloworld/helloworld.cpp?range=6-81)]
 
 ## Build and run your app
 
 Now you're ready to build your app and test our speech recognition using the Speech service.
 
-1. **Compile the code** - From the menu bar of Visual Stuio, choose **Build** > **Build Solution**.
+1. **Compile the code** - From the menu bar of Visual Studio, choose **Build** > **Build Solution**.
 2. **Start your app** - From the menu bar, choose **Debug** > **Start Debugging** or press **F5**.
 3. **Start recognition** - It'll prompt you to speak a phrase in English. Your speech is sent to the Speech service, transcribed as text, and rendered in the console.
 
```
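
The "wait on the future returned to complete" flow described in this hunk is language-agnostic. As a minimal sketch (not the Speech SDK itself), here it is in Python, where `recognize_once_async` is a hypothetical stand-in for `RecognizeOnceAsync()`:

```python
from concurrent.futures import Future, ThreadPoolExecutor

# Hypothetical stand-in for the SDK's RecognizeOnceAsync(): it starts
# recognition of a single phrase and returns a future that completes once
# the phrase has been identified. A real implementation would stream
# microphone audio to the Speech service.
_executor = ThreadPoolExecutor(max_workers=1)

def recognize_once_async() -> "Future[str]":
    return _executor.submit(lambda: "HomeAutomation.TurnOn")

# The single-shot pattern from the quickstart: for simplicity, block on the
# returned future until the one phrase has been recognized.
result = recognize_once_async().result()
_executor.shutdown()
print(result)
```

Blocking on the future keeps the quickstart linear; an event-driven app would instead attach a continuation rather than wait.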

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/csharp/dotnet.md

Lines changed: 7 additions & 6 deletions

```diff
@@ -18,6 +18,7 @@ zone_pivot_groups: programming-languages-set-two
 Before you get started, make sure to:
 
 > [!div class="checklist"]
+>
 > * [Create an Azure Speech Resource](../../../../get-started.md)
 > * [Create a LUIS application and get an endpoint key](../../../../quickstarts/create-luis.md)
 > * [Setup your development environment](../../../../quickstarts/setup-platform.md?tabs=dotnet)
@@ -37,18 +38,18 @@ Let's add some code that works as a skeleton for our project. Make note that you
 
 ## Create a Speech configuration
 
-Before you can initialize a `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoing key and region. Insert this code in the `RecognizeIntentAsync()` method.
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code in the `RecognizeIntentAsync()` method.
 
 This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig?view=azure-dotnet).
 
 > [!NOTE]
-> It is important to use the LUIS Endpoint key and not the Starter or Authroing keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
+> It is important to use the LUIS Endpoint key and not the Starter or Authoring keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
 
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=26)]
 
-## Initialize a IntentRecognizer
+## Initialize an IntentRecognizer
 
-Now, let's create a `IntentRecognizer`. This object is created inside of a using statement to ensure the proper release of unmanaged resources. Insert this code in the `RecognizeIntentAsync()` method, right below your Speech configuration.
+Now, let's create an `IntentRecognizer`. This object is created inside of a using statement to ensure the proper release of unmanaged resources. Insert this code in the `RecognizeIntentAsync()` method, right below your Speech configuration.
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=28-30,76)]
 
 ## Add a LanguageUnderstandingModel and Intents
@@ -58,7 +59,7 @@ You now need to associate a `LanguageUnderstandingModel` with the intent recogni
 
 ## Recognize an intent
 
-From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop reconizing speech.
+From the `IntentRecognizer` object, you're going to call the `RecognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
 
 Inside the using statement, add this code:
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=46)]
@@ -72,7 +73,7 @@ Inside the using statement, below `RecognizeOnceAsync()`, add this code:
 
 ## Check your code
 
-At this point, your code should look like this:
+At this point, your code should look like this:
 (We've added some comments to this version)
 [!code-csharp[](~/samples-cognitive-services-speech-sdk/quickstart/csharp/dotnet/intent-recognition/helloworld/Program.cs?range=5-86)]
 
```
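
The C# quickstart's point about creating the recognizer "inside of a using statement to ensure the proper release of unmanaged resources" can be sketched with Python's equivalent, the context-manager protocol. `FakeIntentRecognizer` below is a hypothetical stub, not the Speech SDK:

```python
# The recognizer owns unmanaged resources (a microphone handle, a service
# connection) that should be released deterministically. Python's
# __enter__/__exit__ play the same role as C#'s IDisposable + using.
class FakeIntentRecognizer:
    def __init__(self) -> None:
        self.closed = False

    def recognize_once(self) -> str:
        # A real recognizer would capture one utterance from the microphone.
        return "HomeAutomation.TurnOn"

    def __enter__(self) -> "FakeIntentRecognizer":
        return self

    def __exit__(self, exc_type, exc, tb) -> bool:
        self.closed = True  # analogous to Dispose() at the end of `using`
        return False

with FakeIntentRecognizer() as recognizer:
    intent = recognizer.recognize_once()

# On exiting the block the resources are released, even if recognition threw.
print(intent, recognizer.closed)
```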

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/header.md

Lines changed: 3 additions & 2 deletions

```diff
@@ -15,7 +15,8 @@ zone_pivot_groups: programming-languages-set-two
 
 In this quickstart you will use the [Speech SDK](~/articles/cognitive-services/speech-service/speech-sdk.md) to interactively recognize speech from audio data captured from a microphone. After satisfying a few prerequisites, recognizing speech from a microphone only takes four steps:
 > [!div class="checklist"]
+>
 > * Create a ````SpeechConfig```` object from your subscription key and region.
-> * Create a ````IntentRecognizer```` object using the ````SpeechConfig```` object from above.
+> * Create an ````IntentRecognizer```` object using the ````SpeechConfig```` object from above.
 > * Using the ````IntentRecognizer```` object, start the recognition process for a single utterance.
-> * Inspect the ````IntentRecognitionResult```` returned.
+> * Inspect the ````IntentRecognitionResult```` returned.
```
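
The four checklist steps in this header can be sketched as a tiny pipeline. The types below (`SpeechConfig`, `IntentRecognizer`, `IntentRecognitionResult`) are simplified stand-ins for illustration, not the real SDK classes:

```python
from dataclasses import dataclass

@dataclass
class SpeechConfig:
    subscription: str
    region: str

@dataclass
class IntentRecognitionResult:
    text: str
    intent_id: str

class IntentRecognizer:
    def __init__(self, speech_config: SpeechConfig) -> None:
        self.speech_config = speech_config

    def recognize_once(self) -> IntentRecognitionResult:
        # A real recognizer would record a single utterance from the mic.
        return IntentRecognitionResult("turn on the lights", "HomeAutomation.TurnOn")

config = SpeechConfig("YourLuisEndpointKey", "YourServiceRegion")  # 1. config from key + region
recognizer = IntentRecognizer(config)                              # 2. recognizer from the config
result = recognizer.recognize_once()                               # 3. recognize a single utterance
print(result.intent_id)                                            # 4. inspect the result
```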

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/java/jre.md

Lines changed: 7 additions & 6 deletions

```diff
@@ -18,6 +18,7 @@ zone_pivot_groups: programming-languages-set-two
 Before you get started, make sure to:
 
 > [!div class="checklist"]
+>
 > * [Create an Azure Speech Resource](../../../../get-started.md)
 > * [Create a LUIS application and get an endpoint key](../../../../quickstarts/create-luis.md)
 > * [Setup your development environment](../../../../quickstarts/setup-platform.md?tabs=jre)
@@ -34,18 +35,18 @@ Let's add some code that works as a skeleton for our project.
 
 ## Create a Speech configuration
 
-Before you can initialize a `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoing key and region. Insert this code in the try / catch block in main
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code in the try / catch block in main
 
 This sample uses the `FromSubscription()` method to build the `SpeechConfig`. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig?view=azure-dotnet).
 
 > [!NOTE]
-> It is important to use the LUIS Endpoint key and not the Starter or Authroing keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
+> It is important to use the LUIS Endpoint key and not the Starter or Authoring keys as only the Endpoint key is valid for speech to intent recognition. See [Create a LUIS application and get an endpoint key](~/articles/cognitive-services/Speech-Service/quickstarts/create-luis.md) for instructions on how to get the correct key.
 
 [!code-java[](~/samples-cognitive-services-speech-sdk/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java?range=27)]
 
-## Initialize a IntentRecognizer
+## Initialize an IntentRecognizer
 
-Now, let's create a `IntentRecognizer`. Insert this code right below your Speech configuration.
+Now, let's create an `IntentRecognizer`. Insert this code right below your Speech configuration.
 [!code-java[](~/samples-cognitive-services-speech-sdk/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java?range=30)]
 
 ## Add a LanguageUnderstandingModel and Intents
@@ -55,7 +56,7 @@ You now need to associate a `LanguageUnderstandingModel` with the intent recogni
 
 ## Recognize an intent
 
-From the `IntentRecognizer` object, you're going to call the `recognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop reconizing speech.
+From the `IntentRecognizer` object, you're going to call the `recognizeOnceAsync()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
 
 [!code-java[](~/samples-cognitive-services-speech-sdk/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java?range=41)]
 
@@ -73,7 +74,7 @@ It's important to release the speech resources when you're done using them. Inse
 
 ## Check your code
 
-At this point, your code should look like this:
+At this point, your code should look like this:
 (We've added some comments to this version)
 [!code-java[](~/samples-cognitive-services-speech-sdk/quickstart/java/jre/intent-recognition/src/speechsdk/quickstart/Main.java?range=6-76)]
 
```

articles/cognitive-services/Speech-Service/includes/quickstarts/intent-recognition/python/python.md

Lines changed: 6 additions & 5 deletions

```diff
@@ -18,6 +18,7 @@ zone_pivot_groups: programming-languages-set-two
 Before you get started, make sure to:
 
 > [!div class="checklist"]
+>
 > * [Create an Azure Speech Resource](../../../../get-started.md)
 > * [Create a LUIS application and get an endpoint key](../../../../quickstarts/create-luis.md)
 > * [Setup your development environment](../../../../quickstarts/setup-platform.md)
@@ -34,7 +35,7 @@ Let's add some code that works as a skeleton for our project.
 
 ## Create a Speech configuration
 
-Before you can initialize a `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoing key and region. Insert this code next.
+Before you can initialize an `IntentRecognizer` object, you need to create a configuration that uses your LUIS Endpoint key and region. Insert this code next.
 
 This sample constructs the `SpeechConfig` object using LUIS key and region. For a full list of available methods, see [SpeechConfig Class](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig).
 
@@ -43,9 +44,9 @@ This sample constructs the `SpeechConfig` object using LUIS key and region. For
 
 [!code-python[](~/samples-cognitive-services-speech-sdk/quickstart/python/intent-recognition/quickstart.py?range=12)]
 
-## Initialize a IntentRecognizer
+## Initialize an IntentRecognizer
 
-Now, let's create a `IntentRecognizer`. Insert this code right below your Speech configuration.
+Now, let's create an `IntentRecognizer`. Insert this code right below your Speech configuration.
 [!code-python[](~/samples-cognitive-services-speech-sdk/quickstart/python/intent-recognition/quickstart.py?range=15)]
 
 ## Add a LanguageUnderstandingModel and Intents
@@ -55,7 +56,7 @@ You now need to associate a `LanguageUnderstandingModel` with the intent recogni
 
 ## Recognize an intent
 
-From the `IntentRecognizer` object, you're going to call the `recognize_once()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop reconizing speech.
+From the `IntentRecognizer` object, you're going to call the `recognize_once()` method. This method lets the Speech service know that you're sending a single phrase for recognition, and that once the phrase is identified to stop recognizing speech.
 [!code-python[](~/samples-cognitive-services-speech-sdk/quickstart/python/intent-recognition/quickstart.py?range=35)]
 
 ## Display the recognition results (or errors)
@@ -67,7 +68,7 @@ Inside the using statement, below your call to `recognize_once()`, add this code
 
 ## Check your code
 
-At this point, your code should look like this:
+At this point, your code should look like this:
 (We've added some comments to this version)
 [!code-python[](~/samples-cognitive-services-speech-sdk/quickstart/python/intent-recognition/quickstart.py?range=5-47)]
 
```
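
The "Display the recognition results (or errors)" step this file's diff touches amounts to branching on the result's reason. A hedged sketch of that branching, where `ResultReason` and `IntentRecognitionResult` are simplified stand-ins modeled on (but not identical to) the Speech SDK types of the same names:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ResultReason(Enum):
    RecognizedIntent = auto()   # speech transcribed and a LUIS intent matched
    RecognizedSpeech = auto()   # speech transcribed but no intent matched
    NoMatch = auto()            # audio contained no recognizable speech
    Canceled = auto()           # recognition failed, e.g. a bad key

@dataclass
class IntentRecognitionResult:
    reason: ResultReason
    text: str = ""
    intent_id: str = ""

def describe(result: IntentRecognitionResult) -> str:
    if result.reason is ResultReason.RecognizedIntent:
        return 'Recognized: "{}" with intent id: {}'.format(result.text, result.intent_id)
    if result.reason is ResultReason.RecognizedSpeech:
        return 'Recognized: "{}" (no intent matched)'.format(result.text)
    if result.reason is ResultReason.NoMatch:
        return "No speech could be recognized"
    return "Recognition canceled"

message = describe(IntentRecognitionResult(
    ResultReason.RecognizedIntent, "turn on the lights", "HomeAutomation.TurnOn"))
print(message)
```

Handling `NoMatch` and `Canceled` explicitly is what turns the quickstart from a demo into something debuggable: a wrong key surfaces as a cancellation, not an exception.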
