
Commit b36a174

Edit remaining includes
1 parent 6218aa2 commit b36a174

File tree

9 files changed: 53 additions and 56 deletions


articles/ai-services/speech-service/includes/how-to/recognize-speech/cli.md

Lines changed: 4 additions & 5 deletions
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 09/01/2023
+ms.date: 08/13/2024
 ms.author: eur
 ---

@@ -21,11 +21,11 @@ spx recognize --microphone
 > [!NOTE]
 > The Speech CLI defaults to English. You can choose a different language [from the speech to text table](../../../../language-support.md?tabs=stt). For example, add `--source de-DE` to recognize German speech.
 
-Speak into the microphone, and you can see transcription of your words into text in real-time. The Speech CLI stops after a period of silence, or when you select **Ctrl+C**.
+Speak into the microphone, and you can see transcription of your words into text in real time. The Speech CLI stops after a period of silence, or when you select **Ctrl+C**.
 
 ## Recognize speech from a file
 
-The Speech CLI can recognize speech in many file formats and natural languages. In this example, you can use any *.wav* file (16 KHz or 8 KHz, 16-bit, and mono PCM) that contains English speech. Or if you want a quick sample, download the <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/whatstheweatherlike.wav" download="whatstheweatherlike" target="_blank">whatstheweatherlike.wav <span class="docon docon-download x-hidden-focus"></span></a> file, and copy it to the same directory as the Speech CLI binary file.
+The Speech CLI can recognize speech in many file formats and natural languages. In this example, you can use any *.wav* file (16 kHz or 8 kHz, 16-bit, and mono PCM) that contains English speech. Or if you want a quick sample, download the file [whatstheweatherlike.wav](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/whatstheweatherlike.wav), and copy it to the same directory as the Speech CLI binary file.
 
 Use the following command to run the Speech CLI to recognize speech found in the audio file:
 
@@ -42,5 +42,4 @@ The Speech CLI shows a text transcription of the speech on the screen.
 
 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.
 
-For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.
-
+For more information about containers, see Host URLs in [Install and run Speech containers with Docker](../../../speech-container-howto.md#host-urls).
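
As a quick end-to-end check of the file-based flow this include describes, here's a minimal sketch; it assumes *whatstheweatherlike.wav* was downloaded into the same directory as the `spx` binary, as the paragraph above suggests:

```console
spx recognize --file whatstheweatherlike.wav
```

Append `--source de-DE` (or another supported locale from the note above) if the file contains non-English speech.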

articles/ai-services/speech-service/includes/how-to/recognize-speech/cpp.md

Lines changed: 2 additions & 2 deletions
@@ -192,13 +192,13 @@ A common task for speech recognition is specifying the input (or source) languag
 speechConfig->SetSpeechRecognitionLanguage("de-DE");
 ```
 
-[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. For a list of supported locales, see [Language and voice support for the Speech service](../../../language-support.md?tabs=stt).
+[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. For a list of supported locales, see [Language and voice support for the Speech service](../../../language-support.md).
 
 ## Language identification
 
 You can use language identification with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
 
-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-cpp#use-speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-cpp).
 
 ## Use a custom endpoint

articles/ai-services/speech-service/includes/how-to/recognize-speech/csharp.md

Lines changed: 1 addition & 1 deletion
@@ -273,7 +273,7 @@ The [`SpeechRecognitionLanguage`](/dotnet/api/microsoft.cognitiveservices.speech
 
 You can use language identification with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
 
-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-csharp#use-speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-csharp).
 
 ## Use a custom endpoint

articles/ai-services/speech-service/includes/how-to/recognize-speech/java.md

Lines changed: 8 additions & 8 deletions
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 09/01/2023
+ms.date: 08/13/2024
 ms.custom: devx-track-java
 ms.author: eur
 ---
@@ -15,8 +15,8 @@ ms.author: eur
 
 To call the Speech service by using the Speech SDK, you need to create a [SpeechConfig](/java/api/com.microsoft.cognitiveservices.speech.speechconfig) instance. This class includes information about your subscription, like your key and associated region, endpoint, host, or authorization token.
 
-1. Create a Speech resource in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices).
-1. Create a `SpeechConfig` instance by using your key and region.
+1. Create a Speech resource in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices). Get the Speech resource key and region.
+1. Create a `SpeechConfig` instance by using your Speech key and region.
 
 ```java
 import com.microsoft.cognitiveservices.speech.*;
@@ -26,7 +26,7 @@ import java.util.concurrent.Future;
 
 public class Program {
     public static void main(String[] args) throws InterruptedException, ExecutionException {
-        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-subscription-key>", "<paste-your-region>");
+        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-speech-key>", "<paste-your-region>");
     }
 }
 ```
@@ -52,7 +52,7 @@ import java.util.concurrent.Future;
 
 public class Program {
     public static void main(String[] args) throws InterruptedException, ExecutionException {
-        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-subscription-key>", "<paste-your-region>");
+        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-speech-key>", "<paste-your-region>");
         fromMic(speechConfig);
     }
 
@@ -82,7 +82,7 @@ import java.util.concurrent.Future;
 
 public class Program {
     public static void main(String[] args) throws InterruptedException, ExecutionException {
-        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-subscription-key>", "<paste-your-region>");
+        SpeechConfig speechConfig = SpeechConfig.fromSubscription("<paste-your-speech-key>", "<paste-your-region>");
         fromFile(speechConfig);
     }
 
@@ -216,14 +216,14 @@ config.setSpeechRecognitionLanguage("fr-FR");
 
 You can use language identification with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
 
-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-java#use-speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-java).
 
 ## Use a custom endpoint
 
 With [custom speech](../../../custom-speech-overview.md), you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. The following example shows how to set a custom endpoint:
 
 ```java
-SpeechConfig speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSpeechKey", "YourServiceRegion");
 speechConfig.setEndpointId("YourEndpointId");
 SpeechRecognizer speechRecognizer = new SpeechRecognizer(speechConfig);
 ```

articles/ai-services/speech-service/includes/how-to/recognize-speech/javascript.md

Lines changed: 15 additions & 14 deletions
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 09/01/2023
+ms.date: 08/13/2024
 ms.author: eur
 ms.custom: devx-track-js
 ---
@@ -11,12 +11,12 @@ ms.custom: devx-track-js
 
 [!INCLUDE [Introduction](intro.md)]
 
-## Create a speech configuration
+## Create a speech configuration instance
 
-To call the Speech service by using the Speech SDK, you need to create a [`SpeechConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig) instance. This class includes information about your subscription, like your key and associated location/region, endpoint, host, or authorization token.
+To call the Speech service by using the Speech SDK, you need to create a [`SpeechConfig`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig) instance. This class includes information about your subscription, like your key and associated region, endpoint, host, or authorization token.
 
-1. Create a `SpeechConfig` instance by using your key and location/region.
-1. Create a Speech resource on the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices).
+1. Create a Speech resource in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices). Get the Speech resource key and region.
+1. Create a `SpeechConfig` instance by using the following code. Replace `YourSpeechKey` and `YourSpeechRegion` with your Speech resource key and region.
 
 ```javascript
 const speechConfig = sdk.SpeechConfig.fromSubscription("YourSpeechKey", "YourSpeechRegion");
@@ -36,7 +36,7 @@ You can initialize `SpeechConfig` in a few other ways:
 Recognizing speech from a microphone isn't supported in Node.js. It's supported only in a browser-based JavaScript environment. For more information, see the [React sample](https://github.com/Azure-Samples/AzureSpeechReactSample) and the [implementation of speech to text from a microphone](https://github.com/Azure-Samples/AzureSpeechReactSample/blob/main/src/App.js#L29) on GitHub. The React sample shows design patterns for the exchange and management of authentication tokens. It also shows the capture of audio from a microphone or file for speech to text conversions.
 
 > [!NOTE]
-> If you want to use a *specific* audio input device, you need to specify the device ID in the `AudioConfig` object. For more information, see [Select an audio input device with the Speech SDK](../../../how-to-select-audio-input-devices.md).
+> If you want to use a *specific* audio input device, you need to specify the device ID in `AudioConfig`. To learn how to get the device ID, see [Select an audio input device with the Speech SDK](../../../how-to-select-audio-input-devices.md).
 
 ## Recognize speech from a file

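To make the device-ID note above concrete, here's a hedged browser-side sketch of binding recognition to one input device; `YourDeviceId` is a placeholder you'd obtain as the linked article describes, and `speechConfig` is assumed to exist from the earlier setup:

```javascript
// Hypothetical sketch (browser only): bind recognition to a specific microphone.
// "YourDeviceId" is a placeholder; get real device IDs as the linked article describes.
const audioConfig = sdk.AudioConfig.fromMicrophoneInput("YourDeviceId");
const speechRecognizer = new sdk.SpeechRecognizer(speechConfig, audioConfig);
```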
@@ -91,7 +91,7 @@ function fromStream() {
 fromStream();
 ```
 
-Using a push stream as input assumes that the audio data is a raw pulse-code modulation (PCM) data that skips any headers. The API still works in certain cases if the header wasn't skipped. For the best results, consider implementing logic to read off the headers so that `fs` begins at the *start of the audio data*.
+Using a push stream as input assumes that the audio data is raw pulse-code modulation (PCM) data that skips any headers. The API still works in certain cases if the header isn't skipped. For the best results, consider implementing logic to read off the headers so that `fs` begins at the *start of the audio data*.
 
 ## Handle errors

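As a hedged illustration of that header-skipping advice (not part of this commit), the following sketch starts reading at byte 44, the size of a canonical WAV header; real headers can be longer, so treat the offset as an assumption:

```javascript
// Hypothetical sketch: push raw PCM into the stream, skipping a canonical
// 44-byte WAV header so the stream begins at the audio data. Headers can be
// longer in practice; parse the RIFF chunks if you need to be exact.
const fs = require("fs");
const sdk = require("microsoft-cognitiveservices-speech-sdk");

const pushStream = sdk.AudioInputStream.createPushStream();
fs.createReadStream("YourAudioFile.wav", { start: 44 })
    .on("data", chunk => pushStream.write(chunk.slice()))
    .on("end", () => pushStream.close());
const audioConfig = sdk.AudioConfig.fromStreamInput(pushStream);
```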
@@ -126,7 +126,8 @@ switch (result.reason) {
 
 The previous examples use single-shot recognition, which recognizes a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed.
 
-In contrast, you can use continuous recognition when you want to control when to stop recognizing. It requires you to subscribe to the `Recognizing`, `Recognized`, and `Canceled` events to get the recognition results. To stop recognition, you must call [`stopContinuousRecognitionAsync`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer#stopcontinuousrecognitionasync). Here's an example of how continuous recognition is performed on an audio input file.
+In contrast, you can use continuous recognition when you want to control when to stop recognizing. It requires you to subscribe to the `Recognizing`, `Recognized`, and `Canceled` events to get the recognition results. To stop recognition, you must call [`stopContinuousRecognitionAsync`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer#microsoft-cognitiveservices-speech-sdk-speechrecognizer-stopcontinuousrecognitionasync). Here's an example of how continuous recognition is performed on an audio input file.
 
 Start by defining the input and initializing [`SpeechRecognizer`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer):

@@ -173,7 +174,8 @@ speechRecognizer.sessionStopped = (s, e) => {
 };
 ```
 
-With everything set up, call [`startContinuousRecognitionAsync`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer#startcontinuousrecognitionasync) to start recognizing:
+With everything set up, call [`startContinuousRecognitionAsync`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechrecognizer#microsoft-cognitiveservices-speech-sdk-speechrecognizer-startcontinuousrecognitionasync) to start recognizing:
 
 ```javascript
 speechRecognizer.startContinuousRecognitionAsync();
@@ -190,13 +192,13 @@ A common task for speech recognition is specifying the input (or source) languag
 speechConfig.speechRecognitionLanguage = "it-IT";
 ```
 
-The [`speechRecognitionLanguage`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#speechrecognitionlanguage) property expects a language-locale format string. For more information, see the [list of supported speech to text locales](../../../language-support.md?tabs=stt).
+The [`speechRecognitionLanguage`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#microsoft-cognitiveservices-speech-sdk-speechconfig-speechrecognitionlanguage) property expects a language-locale format string. For a list of supported locales, see [Language and voice support for the Speech service](../../../language-support.md).
 
 ## Language identification
 
-You can use [language identification](../../../language-identification.md?pivots=programming-language-javascript#use-speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+You can use language identification with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
 
-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-javascript#use-speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-javascript).

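To make the language identification flow concrete, here's a hedged sketch (not from this commit) using the SDK's auto-detect configuration; the candidate locales and the surrounding `speechConfig` and `audioConfig` objects are assumptions:

```javascript
// Hypothetical sketch: let the service pick among candidate languages,
// then read back which one it detected. The candidate locales are illustrative.
const autoDetectConfig = sdk.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE", "it-IT"]);
const recognizer = sdk.SpeechRecognizer.FromConfig(speechConfig, autoDetectConfig, audioConfig);
recognizer.recognizeOnceAsync(result => {
    const detected = sdk.AutoDetectSourceLanguageResult.fromResult(result);
    console.log(`Detected ${detected.language}: ${result.text}`);
});
```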
 ## Use a custom endpoint

@@ -212,5 +214,4 @@ var speechRecognizer = new SpeechSDK.SpeechRecognizer(speechConfig);
 
 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.
 
-For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.
-
+For more information about containers, see Host URLs in [Install and run Speech containers with Docker](../../../speech-container-howto.md#host-urls).
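
As a hedged sketch of that container-based initialization (the host URL is an assumed local deployment, not something this commit specifies):

```javascript
// Hypothetical sketch: initialize against a self-hosted Speech container.
// "ws://localhost:5000" is an assumed container host; substitute your own URL.
const speechConfig = sdk.SpeechConfig.fromHost(new URL("ws://localhost:5000"));
const speechRecognizer = new sdk.SpeechRecognizer(speechConfig);
```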

articles/ai-services/speech-service/includes/how-to/recognize-speech/objectivec.md

Lines changed: 2 additions & 3 deletions
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 9/01/2023
+ms.date: 08/13/2024
 ms.author: eur
 ---

@@ -34,5 +34,4 @@ SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] init:speech
 
 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.
 
-For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.
-
+For more information about containers, see Host URLs in [Install and run Speech containers with Docker](../../../speech-container-howto.md#host-urls).
