articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md (+2 −2)

@@ -33,8 +33,8 @@ To open the settings for an Azure AD role:
 1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles** > **Roles**.

-1. This page shows a list of Azure AD roles available in the tenant, including built-in and custom roles.
-   :::image type="content" source="media/pim-how-to-change-default-settings/role-settings.png" alt-text="Screenshot that shows the list of Azure AD roles available in the tenant, including built-in and custom roles." lightbox="media/pim-how-to-change-default-settings/role-settings.png":::
+1. On this page you see a list of Azure AD roles available in the tenant, including built-in and custom roles.
+   :::image type="content" source="media/pim-how-to-change-default-settings/role-settings.png" alt-text="Screenshot that shows the list of Azure AD roles available in the tenant, including built-in and custom roles." lightbox="media/pim-how-to-change-default-settings/role-settings.png":::

 1. Select the role whose settings you want to configure.
articles/ai-services/speech-service/includes/how-to/recognize-speech/cli.md (+6 −6)

@@ -2,13 +2,13 @@
 author: eric-urban
 ms.service: cognitive-services
 ms.topic: include
-ms.date: 09/08/2020
+ms.date: 09/01/2023
 ms.author: eur
 ---

 [!INCLUDE [Introduction](intro.md)]

-## Speech to text from a microphone
+## Recognize speech from a microphone

 Plug in and turn on your PC microphone. Turn off any apps that might also use the microphone. Some computers have a built-in microphone, whereas others require configuration of a Bluetooth device.

@@ -21,11 +21,11 @@ spx recognize --microphone
 > [!NOTE]
 > The Speech CLI defaults to English. You can choose a different language [from the speech to text table](../../../../language-support.md?tabs=stt). For example, add `--source de-DE` to recognize German speech.

-Speak into the microphone, and you see transcription of your words into text in real-time. The Speech CLI stops after a period of silence, or when you select Ctrl+C.
+Speak into the microphone, and you can see transcription of your words into text in real time. The Speech CLI stops after a period of silence, or when you select **Ctrl+C**.

-## Speech to text from an audio file
+## Recognize speech from a file

-The Speech CLI can recognize speech in many file formats and natural languages. In this example, you can use any WAV file (16 KHz or 8 KHz, 16-bit, and mono PCM) that contains English speech. Or if you want a quick sample, download the <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/whatstheweatherlike.wav" download="whatstheweatherlike" target="_blank">whatstheweatherlike.wav <span class="docon docon-download x-hidden-focus"></span></a> file and copy it to the same directory as the Speech CLI binary file.
+The Speech CLI can recognize speech in many file formats and natural languages. In this example, you can use any *.wav* file (16 KHz or 8 KHz, 16-bit, and mono PCM) that contains English speech. Or if you want a quick sample, download the <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/whatstheweatherlike.wav" download="whatstheweatherlike" target="_blank">whatstheweatherlike.wav <span class="docon docon-download x-hidden-focus"></span></a> file, and copy it to the same directory as the Speech CLI binary file.

 Use the following command to run the Speech CLI to recognize speech found in the audio file:

@@ -42,5 +42,5 @@ The Speech CLI shows a text transcription of the speech on the screen.
 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.

-For more information about containers, see the [speech containers](../../../speech-container-howto.md#host-urls) how-to guide.
+For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.
articles/ai-services/speech-service/includes/how-to/recognize-speech/cpp.md (+17 −16)

@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: cognitive-services
 ms.topic: include
-ms.date: 03/06/2020
+ms.date: 09/01/2023
 ms.author: eur
 ---

@@ -14,7 +14,8 @@ ms.author: eur
 To call the Speech service using the Speech SDK, you need to create a [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance. This class includes information about your subscription, like your key and associated location/region, endpoint, host, or authorization token.

-Create a `SpeechConfig` instance by using your key and region. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](~/articles/ai-services/multi-service-resource.md?pivots=azportal).
+1. Create a `SpeechConfig` instance by using your key and region.
+1. Create a Speech resource on the [Azure portal](https://portal.azure.com). For more information, see [Create a multi-service resource](~/articles/ai-services/multi-service-resource.md?pivots=azportal).

 ```cpp
 using namespace std;

@@ -25,16 +26,16 @@ auto speechConfig = SpeechConfig::FromSubscription("YourSpeechKey", "YourSpeechR
 You can initialize `SpeechConfig` in a few other ways:

-* With an endpoint: pass in a Speech service endpoint. A key or authorization token is optional.
-* With a host: pass in a host address. A key or authorization token is optional.
-* With an authorization token: pass in an authorization token and the associated region.
+* Use an endpoint, and pass in a Speech service endpoint. A key or authorization token is optional.
+* Use a host, and pass in a host address. A key or authorization token is optional.
+* Use an authorization token with the associated region/location.

 > [!NOTE]
-> Regardless of whether you're performing speech recognition, speech synthesis, translation, or intent recognition, you'll always create a configuration.
+> Regardless of whether you're performing speech recognition, speech synthesis, translation, or intent recognition, you always create a configuration.

 ## Recognize speech from a microphone

-To recognize speech by using your device microphone, create an [`AudioConfig`](/cpp/cognitive-services/speech/audio-audioconfig) instance by using `FromDefaultMicrophoneInput()`. Then initialize [`SpeechRecognizer`](/cpp/cognitive-services/speech/speechrecognizer) by passing `audioConfig` and `config`.
+To recognize speech by using your device microphone, create an [`AudioConfig`](/cpp/cognitive-services/speech/audio-audioconfig) instance by using the `FromDefaultMicrophoneInput()` member function. Then initialize the [`SpeechRecognizer`](/cpp/cognitive-services/speech/speechrecognizer) object by passing `audioConfig` and `config`.

 ```cpp
 using namespace Microsoft::CognitiveServices::Speech::Audio;

@@ -47,11 +48,11 @@ auto result = speechRecognizer->RecognizeOnceAsync().get();
-If you want to use a *specific* audio input device, you need to specify the device ID in `AudioConfig`. Learn [how to get the device ID](../../../how-to-select-audio-input-devices.md) for your audio input device.
+If you want to use a *specific* audio input device, you need to specify the device ID in `AudioConfig`. For more information on how to get the device ID for your audio input device, see [Select an audio input device with the Speech SDK](../../../how-to-select-audio-input-devices.md).

 ## Recognize speech from a file

-If you want to recognize speech from an audio file instead of using a microphone, you still need to create an `AudioConfig` instance. But instead of calling `FromDefaultMicrophoneInput()`, you call `FromWavFileInput()` and pass the file path:
+If you want to recognize speech from an audio file instead of using a microphone, you still need to create an `AudioConfig` instance. But in this case you don't call `FromDefaultMicrophoneInput()`. You call `FromWavFileInput()` and pass the file path:

@@ -78,8 +79,8 @@ auto result = speechRecognizer->RecognizeOnceAsync().get();
 You need to write some code to handle the result. This sample evaluates [`result->Reason`](/cpp/cognitive-services/speech/recognitionresult#reason) and:

 * Prints the recognition result: `ResultReason::RecognizedSpeech`.
-* If there is no recognition match, informs the user: `ResultReason::NoMatch`.
-* If an error is encountered, prints the error message: `ResultReason::Canceled`.
+* If there's no recognition match, it informs the user: `ResultReason::NoMatch`.
+* If an error is encountered, it prints the error message: `ResultReason::Canceled`.
@@ ... @@
-A common task for speech recognition is specifying the input (or source) language. The following example shows how you would change the input language to German. In your code, find your [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance and add this line directly below it:
+A common task for speech recognition is specifying the input (or source) language. The following example shows how to change the input language to German. In your code, find your [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance and add this line directly below it:

@@ ... @@
-[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. Refer to the [list of supported speech to text locales](../../../language-support.md?tabs=stt).
+[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. For more information, see the [list of supported speech to text locales](../../../language-support.md?tabs=stt).

 ## Language identification

-You can use [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text) with Speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+You can use [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.

-For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text).

 ## Use a custom endpoint

@@ -213,5 +214,5 @@ auto speechRecognizer = SpeechRecognizer::FromConfig(speechConfig);
 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.

-For more information about containers, see the [speech containers](../../../speech-container-howto.md#host-urls) how-to guide.
+For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.