articles/cognitive-services/Speech-Service/includes/how-to/speech-to-text-basics/speech-to-text-basics-javascript.md
41 additions & 33 deletions
```diff
@@ -2,7 +2,7 @@
 author: trevorbye
 ms.service: cognitive-services
 ms.topic: include
-ms.date: 04/14/2020
+ms.date: 04/15/2020
 ms.author: trbye
 ---
```
@@ -23,7 +23,15 @@ Additionally, depending on the target environment use one of the following:
For more information on `import`, see <a href="https://javascript.info/import-export" target="_blank">export and import <span class="docon docon-navigate-external x-hidden-focus"></span></a>.
@@ -39,14 +47,14 @@ For more information on `require`, see <a href="https://nodejs.org/en/knowledge/
# [script](#tab/script)

Download and extract the <a href="https://aka.ms/csspeech/jsbrowserpackage" target="_blank">JavaScript Speech SDK <span class="docon docon-navigate-external x-hidden-focus"></span></a> *microsoft.cognitiveservices.speech.sdk.bundle.js* file, and place it in a folder accessible to your HTML file.

```diff
-> If you're targeting a web browser, and using the `<script>` tag; the `sdk` prefix is not needed. The `sdk` prefix is an alias we use to name our `import` or `require` module.
+> If you're targeting a web browser and using the `<script>` tag, the `sdk` prefix is not needed. The `sdk` prefix is an alias used to name the `require` module.
```

---
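Once the bundle is in place, a minimal host page might look like the sketch below. The `src` path assumes the bundle sits next to the HTML file, and `SpeechSDK` is the global object the browser bundle exposes; everything else here is illustrative.

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Browser bundle downloaded and extracted in the step above -->
  <script src="microsoft.cognitiveservices.speech.sdk.bundle.js"></script>
</head>
<body>
  <script>
    // The bundle exposes a global SpeechSDK object, so no `sdk` prefix is needed.
    console.log("Speech SDK loaded:", typeof SpeechSDK !== "undefined");
  </script>
</body>
</html>
```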
52
60
@@ -67,7 +75,7 @@ There are a few ways that you can initialize a [`SpeechConfig`](https://docs.mic

Let's take a look at how a [`SpeechConfig`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig?view=azure-node-latest) is created using a key and region. See the [region support](https://docs.microsoft.com/azure/cognitive-services/speech-service/regions#speech-sdk) page to find your region identifier.
@@ -77,7 +85,7 @@ After you've created a [`SpeechConfig`](https://docs.microsoft.com/javascript/ap

If you're recognizing speech using your device's default microphone, here's what the [`SpeechRecognizer`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer?view=azure-node-latest) should look like:

If you want to specify the audio input device, then you'll need to create an [`AudioConfig`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest) and provide the `audioConfig` parameter when initializing your [`SpeechRecognizer`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer?view=azure-node-latest).
@@ -88,15 +96,15 @@ If you want to specify the audio input device, then you'll need to create an [`A
If you want to provide an audio file instead of using a microphone, you'll still need to provide an `audioConfig`. However, this can only be done when targeting **Node.js**: when you create an [`AudioConfig`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/audioconfig?view=azure-node-latest), instead of calling `fromDefaultMicrophoneInput`, you'll call `fromWavFileInput` and pass the `filename` parameter.
```diff
             console.log("CANCELED: Did you update the subscription info?");
         }
         break;
```
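The `CANCELED` branch above only suggests a subscription problem when the cancellation reason is an error. That logic can be sketched without the service; the `CancellationReason` values below are locally defined stand-ins for the SDK's enum, and `describeCancellation` is a hypothetical helper name:

```javascript
// Stand-in for the Speech SDK's CancellationReason enum (illustrative only).
const CancellationReason = { Error: "Error", EndOfStream: "EndOfStream" };

// Mirrors the sample's canceled-event handling: only an Error reason
// prompts the "Did you update the subscription info?" hint.
function describeCancellation(e) {
    let message = `CANCELED: Reason=${e.reason}`;
    if (e.reason === CancellationReason.Error) {
        message += `\nCANCELED: ErrorCode=${e.errorCode}`;
        message += `\nCANCELED: ErrorDetails=${e.errorDetails}`;
        message += "\nCANCELED: Did you update the subscription info?";
    }
    return message;
}

console.log(describeCancellation({
    reason: CancellationReason.Error,
    errorCode: 4,
    errorDetails: "authentication failed"
}));
```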
@@ -155,7 +163,7 @@ Continuous recognition is a bit more involved than single-shot recognition. It r

Let's start by defining the input and initializing a [`SpeechRecognizer`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer?view=azure-node-latest):

We'll subscribe to the events sent from the [`SpeechRecognizer`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer?view=azure-node-latest).
@@ -166,32 +174,32 @@ We'll subscribe to the events sent from the [`SpeechRecognizer`](https://docs.mi

* [`canceled`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer?view=azure-node-latest#canceled): Signal for events containing canceled recognition results (indicating a recognition attempt that was canceled as a result of a direct cancellation request or, alternatively, a transport or protocol failure).
```diff
         console.log("CANCELED: Did you update the subscription info?");
     }

     recognizer.stopContinuousRecognitionAsync();
 };

-recognizer.SessionStopped = (s, e) => {
+recognizer.sessionStopped = (s, e) => {
     console.log("\n Session stopped event.");
     recognizer.stopContinuousRecognitionAsync();
 };
```
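The wiring pattern above (handlers assigned to lower-camel-case event properties such as `sessionStopped`, each ending the session via `stopContinuousRecognitionAsync`) can be exercised without the service. Below is a minimal sketch using a stub recognizer; the stub and its `_fire` helper are hypothetical test scaffolding, not SDK API:

```javascript
// Stub recognizer mimicking the Speech SDK's event-property style
// (hypothetical scaffolding; the real recognizer comes from the SDK).
function createStubRecognizer() {
    return {
        recognized: null,
        sessionStopped: null,
        stopped: false,
        stopContinuousRecognitionAsync() { this.stopped = true; },
        // Test helper: simulate the service raising an event.
        _fire(name, args) { if (this[name]) this[name](this, args); }
    };
}

const recognizer = createStubRecognizer();
const transcript = [];

// Same handler shape as the sample, including the corrected
// lower-camel-case `sessionStopped` property name.
recognizer.recognized = (s, e) => transcript.push(e.result.text);
recognizer.sessionStopped = (s, e) => {
    console.log("\n Session stopped event.");
    recognizer.stopContinuousRecognitionAsync();
};

recognizer._fire("recognized", { result: { text: "hello world" } });
recognizer._fire("sessionStopped", {});
```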
@@ -229,7 +237,7 @@ The [`speechRecognitionLanguage`](https://docs.microsoft.com/javascript/api/micr

## Improve recognition accuracy

There are a few ways to improve recognition accuracy with the Speech SDK. Let's take a look at Phrase Lists. Phrase Lists are used to identify known phrases in audio data, like a person's name or a specific location. Single words or complete phrases can be added to a Phrase List. During recognition, an entry in a phrase list is used if an exact match for the entire phrase is included in the audio. If an exact match to the phrase is not found, recognition is not assisted.

> [!IMPORTANT]
> The Phrase List feature is only available in English.
@@ -239,7 +247,7 @@ To use a phrase list, first create a [`PhraseListGrammar`](https://docs.microsof

Any changes to [`PhraseListGrammar`](https://docs.microsoft.com/javascript/api/microsoft-cognitiveservices-speech-sdk/phraselistgrammar?view=azure-node-latest) take effect on the next recognition or after a reconnection to the Speech service.
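The exact-match rule described above can be illustrated with a plain function. This is an illustration of the documented behavior, not the SDK's implementation; `phraseAssists` and the "Contoso Coffee" phrase are hypothetical:

```javascript
// Illustration of the exact-match rule: a phrase-list entry only assists
// recognition when the entire phrase occurs in the audio's transcript.
function phraseAssists(phraseList, recognizedText) {
    return phraseList.some(phrase => recognizedText.includes(phrase));
}

console.log(phraseAssists(["Contoso Coffee"], "meet me at Contoso Coffee at noon")); // whole phrase present: assists
console.log(phraseAssists(["Contoso Coffee"], "meet me at Contoso at noon"));        // partial match only: no assist
```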