Commit 7f66280

Merge pull request #250858 from MicrosoftDocs/main

Publish to live, Sunday 4:00PM PDT, 9/10

2 parents: 8cac194 + b97872d

32 files changed: +466 -200 lines

articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -33,8 +33,8 @@ To open the settings for an Azure AD role:

 1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles** > **Roles**.

-1. This page shows a list of Azure AD roles available in the tenant, including built-in and custom roles.
-:::image type="content" source="media/pim-how-to-change-default-settings/role-settings.png" alt-text="Screenshot that shows the list of Azure AD roles available in the tenant, including built-in and custom roles." lightbox="media/pim-how-to-change-default-settings/role-settings.png":::
+1. On this page you see a list of Azure AD roles available in the tenant, including built-in and custom roles.
+:::image type="content" source="media/pim-how-to-change-default-settings/role-settings.png" alt-text="Screenshot that shows the list of Azure AD roles available in the tenant, including built-in and custom roles." lightbox="media/pim-how-to-change-default-settings/role-settings.png":::

 1. Select the role whose settings you want to configure.
````

articles/ai-services/speech-service/how-to-recognize-speech.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: speech-service
 ms.topic: how-to
-ms.date: 09/16/2022
+ms.date: 09/01/2023
 ms.author: eur
 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
 ms.custom: devx-track-extended-java, devx-track-go, devx-track-js, devx-track-python
````

articles/ai-services/speech-service/includes/how-to/recognize-speech/cli.md

Lines changed: 6 additions & 6 deletions

````diff
@@ -2,13 +2,13 @@
 author: eric-urban
 ms.service: cognitive-services
 ms.topic: include
-ms.date: 09/08/2020
+ms.date: 09/01/2023
 ms.author: eur
 ---

 [!INCLUDE [Introduction](intro.md)]

-## Speech to text from a microphone
+## Recognize speech from a microphone

 Plug in and turn on your PC microphone. Turn off any apps that might also use the microphone. Some computers have a built-in microphone, whereas others require configuration of a Bluetooth device.

@@ -21,11 +21,11 @@ spx recognize --microphone
 > [!NOTE]
 > The Speech CLI defaults to English. You can choose a different language [from the speech to text table](../../../../language-support.md?tabs=stt). For example, add `--source de-DE` to recognize German speech.

-Speak into the microphone, and you see transcription of your words into text in real-time. The Speech CLI stops after a period of silence, or when you select Ctrl+C.
+Speak into the microphone, and you can see transcription of your words into text in real time. The Speech CLI stops after a period of silence, or when you select **Ctrl+C**.

-## Speech to text from an audio file
+## Recognize speech from a file

-The Speech CLI can recognize speech in many file formats and natural languages. In this example, you can use any WAV file (16 KHz or 8 KHz, 16-bit, and mono PCM) that contains English speech. Or if you want a quick sample, download the <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/whatstheweatherlike.wav" download="whatstheweatherlike" target="_blank">whatstheweatherlike.wav <span class="docon docon-download x-hidden-focus"></span></a> file and copy it to the same directory as the Speech CLI binary file.
+The Speech CLI can recognize speech in many file formats and natural languages. In this example, you can use any *.wav* file (16 KHz or 8 KHz, 16-bit, and mono PCM) that contains English speech. Or if you want a quick sample, download the <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/whatstheweatherlike.wav" download="whatstheweatherlike" target="_blank">whatstheweatherlike.wav <span class="docon docon-download x-hidden-focus"></span></a> file, and copy it to the same directory as the Speech CLI binary file.

 Use the following command to run the Speech CLI to recognize speech found in the audio file:

@@ -42,5 +42,5 @@ The Speech CLI shows a text transcription of the speech on the screen.

 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.

-For more information about containers, see the [speech containers](../../../speech-container-howto.md#host-urls) how-to guide.
+For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.
````

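The two recognition flows in this include file come down to a couple of Speech CLI invocations. As a quick sketch (assuming the `spx` tool is installed and already configured with your Speech resource key and region, and that `whatstheweatherlike.wav` sits in the same directory as the binary):

```console
# Recognize speech from the default microphone; stops after silence or Ctrl+C.
spx recognize --microphone

# Recognize speech from a mono 16-bit PCM WAV file (16 kHz or 8 kHz).
spx recognize --file whatstheweatherlike.wav

# Recognize German instead of the default English.
spx recognize --microphone --source de-DE
```

If your build's options differ, check `spx help recognize`.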
articles/ai-services/speech-service/includes/how-to/recognize-speech/cpp.md

Lines changed: 17 additions & 16 deletions

````diff
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: cognitive-services
 ms.topic: include
-ms.date: 03/06/2020
+ms.date: 09/01/2023
 ms.author: eur
 ---

@@ -14,7 +14,8 @@ ms.author: eur

 To call the Speech service using the Speech SDK, you need to create a [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance. This class includes information about your subscription, like your key and associated location/region, endpoint, host, or authorization token.

-Create a `SpeechConfig` instance by using your key and region. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](~/articles/ai-services/multi-service-resource.md?pivots=azportal).
+1. Create a `SpeechConfig` instance by using your key and region.
+1. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](~/articles/ai-services/multi-service-resource.md?pivots=azportal).

 ```cpp
 using namespace std;
@@ -25,16 +26,16 @@ auto speechConfig = SpeechConfig::FromSubscription("YourSpeechKey", "YourSpeechR

 You can initialize `SpeechConfig` in a few other ways:

-* With an endpoint: pass in a Speech service endpoint. A key or authorization token is optional.
-* With a host: pass in a host address. A key or authorization token is optional.
-* With an authorization token: pass in an authorization token and the associated region.
+* Use an endpoint, and pass in a Speech service endpoint. A key or authorization token is optional.
+* Use a host, and pass in a host address. A key or authorization token is optional.
+* Use an authorization token with the associated region/location.

 > [!NOTE]
-> Regardless of whether you're performing speech recognition, speech synthesis, translation, or intent recognition, you'll always create a configuration.
+> Regardless of whether you're performing speech recognition, speech synthesis, translation, or intent recognition, you always create a configuration.

 ## Recognize speech from a microphone

-To recognize speech by using your device microphone, create an [`AudioConfig`](/cpp/cognitive-services/speech/audio-audioconfig) instance by using `FromDefaultMicrophoneInput()`. Then initialize [`SpeechRecognizer`](/cpp/cognitive-services/speech/speechrecognizer) by passing `audioConfig` and `config`.
+To recognize speech by using your device microphone, create an [`AudioConfig`](/cpp/cognitive-services/speech/audio-audioconfig) instance by using the `FromDefaultMicrophoneInput()` member function. Then initialize the [`SpeechRecognizer`](/cpp/cognitive-services/speech/speechrecognizer) object by passing `audioConfig` and `config`.

 ```cpp
 using namespace Microsoft::CognitiveServices::Speech::Audio;
@@ -47,11 +48,11 @@ auto result = speechRecognizer->RecognizeOnceAsync().get();
 cout << "RECOGNIZED: Text=" << result->Text << std::endl;
 ```

-If you want to use a *specific* audio input device, you need to specify the device ID in `AudioConfig`. Learn [how to get the device ID](../../../how-to-select-audio-input-devices.md) for your audio input device.
+If you want to use a *specific* audio input device, you need to specify the device ID in `AudioConfig`. For more information on how to get the device ID for your audio input device, see [Select an audio input device with the Speech SDK](../../../how-to-select-audio-input-devices.md).

 ## Recognize speech from a file

-If you want to recognize speech from an audio file instead of using a microphone, you still need to create an `AudioConfig` instance. But instead of calling `FromDefaultMicrophoneInput()`, you call `FromWavFileInput()` and pass the file path:
+If you want to recognize speech from an audio file instead of using a microphone, you still need to create an `AudioConfig` instance. But for this case you don't call `FromDefaultMicrophoneInput()`. You call `FromWavFileInput()` and pass the file path:

 ```cpp
 using namespace Microsoft::CognitiveServices::Speech::Audio;
@@ -78,8 +79,8 @@ auto result = speechRecognizer->RecognizeOnceAsync().get();
 You need to write some code to handle the result. This sample evaluates [`result->Reason`](/cpp/cognitive-services/speech/recognitionresult#reason) and:

 * Prints the recognition result: `ResultReason::RecognizedSpeech`.
-* If there is no recognition match, informs the user: `ResultReason::NoMatch`.
-* If an error is encountered, prints the error message: `ResultReason::Canceled`.
+* If there's no recognition match, it informs the user: `ResultReason::NoMatch`.
+* If an error is encountered, it prints the error message: `ResultReason::Canceled`.

 ```cpp
 switch (result->Reason)
@@ -185,19 +186,19 @@ speechRecognizer->StopContinuousRecognitionAsync().get();

 ## Change the source language

-A common task for speech recognition is specifying the input (or source) language. The following example shows how you would change the input language to German. In your code, find your [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance and add this line directly below it:
+A common task for speech recognition is specifying the input (or source) language. The following example shows how to change the input language to German. In your code, find your [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance and add this line directly below it:

 ```cpp
 speechConfig->SetSpeechRecognitionLanguage("de-DE");
 ```

-[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. Refer to the [list of supported speech to text locales](../../../language-support.md?tabs=stt).
+[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. For more information, see the [list of supported speech to text locales](../../../language-support.md?tabs=stt).

 ## Language identification

-You can use [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text) with Speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+You can use [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.

-For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text).

 ## Use a custom endpoint

@@ -213,5 +214,5 @@ auto speechRecognizer = SpeechRecognizer::FromConfig(speechConfig);

 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.

-For more information about containers, see the [speech containers](../../../speech-container-howto.md#host-urls) how-to guide.
+For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.
````

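Pulling the cpp.md threads together: the snippets this diff touches (create the config, choose microphone or file input, run `RecognizeOnceAsync`, then branch on `result->Reason`) compose into one short program. The following is an illustrative sketch rather than text from the diff; it assumes the Speech SDK for C++ is installed (providing `speechapi_cxx.h`), and `YourSpeechKey`/`YourSpeechRegion` are placeholders for your own resource values.

```cpp
#include <iostream>
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;

int main()
{
    // Placeholders: substitute your own key and region (or, for a container,
    // initialize from a host URL instead of key and region).
    auto speechConfig = SpeechConfig::FromSubscription("YourSpeechKey", "YourSpeechRegion");
    speechConfig->SetSpeechRecognitionLanguage("en-US");

    // File input; swap in AudioConfig::FromDefaultMicrophoneInput() for a microphone.
    auto audioConfig = AudioConfig::FromWavFileInput("whatstheweatherlike.wav");
    auto speechRecognizer = SpeechRecognizer::FromConfig(speechConfig, audioConfig);

    // Single-shot recognition: returns after one utterance or a timeout.
    auto result = speechRecognizer->RecognizeOnceAsync().get();

    switch (result->Reason)
    {
    case ResultReason::RecognizedSpeech:
        std::cout << "RECOGNIZED: Text=" << result->Text << std::endl;
        break;
    case ResultReason::NoMatch:
        std::cout << "NOMATCH: Speech could not be recognized." << std::endl;
        break;
    case ResultReason::Canceled:
    {
        auto cancellation = CancellationDetails::FromResult(result);
        std::cout << "CANCELED: ErrorDetails=" << cancellation->ErrorDetails << std::endl;
        break;
    }
    default:
        break;
    }
    return 0;
}
```

The switch mirrors the result-handling bullets in the diff, with `CancellationDetails::FromResult` used to surface error details on a canceled result.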