Commit 9c5f129

Merge pull request #186945 from paulth1/overview-and-three-how-tos
edit pass: overview-and-three-how-tos
2 parents 58cda24 + 5dcb73d commit 9c5f129

4 files changed: +122 additions, −114 deletions

articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md

Lines changed: 19 additions & 19 deletions
@@ -1,7 +1,7 @@
 ---
-title: How to select an audio input device with the Speech SDK
+title: Select an audio input device with the Speech SDK
 titleSuffix: Azure Cognitive Services
-description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, JavaScript) by obtaining the IDs of the audio devices connected to a system.'
+description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, and JavaScript) by obtaining the IDs of the audio devices connected to a system.'
 services: cognitive-services
 author: chlandsi
 manager: nitinme
@@ -14,9 +14,9 @@ ms.devlang: cpp, csharp, java, javascript, objective-c, python
 ms.custom: devx-track-js, ignite-fall-2021
 ---

-# How to: Select an audio input device with the Speech SDK
+# Select an audio input device with the Speech SDK

-Version 1.3.0 of the Speech SDK introduces an API to select the audio input. This article describes how to obtain the IDs of the audio devices connected to a system. These can then be used in the Speech SDK by configuring the audio device through the `AudioConfig` object:
+Version 1.3.0 of the Speech SDK introduces an API to select the audio input. This article describes how to obtain the IDs of the audio devices connected to a system. These IDs can then be used in the Speech SDK. You configure the audio device through the `AudioConfig` object:

 ```C++
 audioConfig = AudioConfig.FromMicrophoneInput("<device id>");
@@ -43,11 +43,11 @@ audioConfig = AudioConfiguration.fromMicrophoneInput("<device id>");
 ```

 > [!Note]
-> Microphone usage is not available for JavaScript running in Node.js
+> Microphone use isn't available for JavaScript running in Node.js.

-## Audio device IDs on Windows for Desktop applications
+## Audio device IDs on Windows for desktop applications

-Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for Desktop applications.
+Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for desktop applications.

 The following code sample illustrates how to use it to enumerate audio devices in C++:

@@ -110,7 +110,7 @@ void ListEndpoints()
     PROPVARIANT varName;
     for (ULONG i = 0; i < count; i++)
     {
-        // Get pointer to endpoint number i.
+        // Get the pointer to endpoint number i.
         hr = pCollection->Item(i, &pEndpoint);
         EXIT_ON_ERROR(hr);

@@ -122,14 +122,14 @@ void ListEndpoints()
                         STGM_READ, &pProps);
         EXIT_ON_ERROR(hr);

-        // Initialize container for property value.
+        // Initialize the container for property value.
         PropVariantInit(&varName);

         // Get the endpoint's friendly-name property.
         hr = pProps->GetValue(PKEY_Device_FriendlyName, &varName);
         EXIT_ON_ERROR(hr);

-        // Print endpoint friendly name and endpoint ID.
+        // Print the endpoint friendly name and endpoint ID.
         printf("Endpoint %d: \"%S\" (%S)\n", i, varName.pwszVal, pwszID);

         CoTaskMemFree(pwszID);
@@ -148,7 +148,7 @@ Exit:
 }
 ```

-In C#, the [NAudio](https://github.com/naudio/NAudio) library can be used to access the CoreAudio API and enumerate devices as follows:
+In C#, you can use the [NAudio](https://github.com/naudio/NAudio) library to access the CoreAudio API and enumerate devices as follows:

 ```cs
 using System;
@@ -176,9 +176,9 @@ A sample device ID is `{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}`.

 ## Audio device IDs on UWP

-On the Universal Windows Platform (UWP), audio input devices can be obtained using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.
+On the Universal Windows Platform (UWP), you can obtain audio input devices by using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.

-The following code samples show how to do this in C++ and C#:
+The following code samples show how to do this step in C++ and C#:

 ```cpp
 #include <winrt/Windows.Foundation.h>
@@ -227,10 +227,10 @@ A sample device ID is `\\\\?\\SWD#MMDEVAPI#{0.0.1.00000000}.{5f23ab69-6181-4f4a-

 ## Audio device IDs on Linux

-The device IDs are selected using standard ALSA device IDs.
+The device IDs are selected by using standard ALSA device IDs.

 The IDs of the inputs attached to the system are contained in the output of the command `arecord -L`.
-Alternatively, they can be obtained using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).
+Alternatively, they can be obtained by using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).

 Sample IDs are `hw:1,0` and `hw:CARD=CC,DEV=0`.

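As an editorial aside (not part of the commit above): the Linux hunk points the reader at `arecord -L`, whose output lists each capture device ID on an unindented line followed by indented description lines. A minimal sketch of extracting just the IDs from that text, with an illustrative (not captured) sample:

```python
def parse_arecord_ids(output: str) -> list[str]:
    """Extract ALSA device IDs from `arecord -L`-style output.

    Device IDs start in column 0; indented lines are descriptions.
    """
    return [
        line.strip()
        for line in output.splitlines()
        if line and not line[0].isspace()
    ]

# Illustrative sample in the shape `arecord -L` produces.
sample = """\
default
    Default ADC
hw:CARD=CC,DEV=0
    Direct hardware device
"""
print(parse_arecord_ids(sample))  # ['default', 'hw:CARD=CC,DEV=0']
```

Any of the returned strings can then be passed where the article's snippets expect a `<device id>`.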
@@ -366,7 +366,7 @@ For example, the UID for the built-in microphone is `BuiltInMicrophoneDevice`.

 ## Audio device IDs on iOS

-Audio device selection with the Speech SDK is not supported on iOS. However, apps using the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) Framework.
+Audio device selection with the Speech SDK isn't supported on iOS. Apps that use the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) Framework.

 For example, the instruction

@@ -375,16 +375,16 @@ For example, the instruction
 withOptions:AVAudioSessionCategoryOptionAllowBluetooth error:NULL];
 ```

-enables the use of a Bluetooth headset for a speech-enabled app.
+Enables the use of a Bluetooth headset for a speech-enabled app.

 ## Audio device IDs in JavaScript

-In JavaScript the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.
+In JavaScript, the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.

 ## Next steps

 > [!div class="nextstepaction"]
-> [Explore our samples on GitHub](https://aka.ms/csspeech/samples)
+> [Explore samples on GitHub](https://aka.ms/csspeech/samples)

 ## See also

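As an editorial aside (not part of the commit): every platform section in this first article follows the same shape — enumerate (endpoint ID, friendly name) pairs, pick one, and hand its ID to `AudioConfig`. A language-neutral sketch of that selection step, using one sample ID from the article and one hypothetical entry:

```python
def pick_device_id(devices, name_substring):
    """Return the endpoint ID of the first device whose friendly name
    contains name_substring (case-insensitive), or None if no match."""
    needle = name_substring.lower()
    for device_id, friendly_name in devices:
        if needle in friendly_name.lower():
            return device_id
    return None

# First ID is the article's sample; the second entry is hypothetical.
devices = [
    ("{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}", "Headset Microphone"),
    ("{0.0.1.00000000}.{8d8e16cd-0b43-4ac8-9e9b-6f8b0a2720f1}", "Webcam Mic"),
]
print(pick_device_id(devices, "headset"))
```

The returned string is what the article's snippets plug into `AudioConfig.FromMicrophoneInput("<device id>")` and its per-language equivalents.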
Lines changed: 33 additions & 34 deletions
@@ -1,7 +1,7 @@
 ---
-title: How to specify source language for speech to text
+title: Specify source language for speech to text
 titleSuffix: Azure Cognitive Services
-description: The Speech SDK allows you to specify the source language when converting speech to text. This article describes how to use the FromConfig and SourceLanguageConfig methods to let the Speech service know the source language and provide a custom model target.
+description: The Speech SDK allows you to specify the source language when you convert speech to text. This article describes how to use the FromConfig and SourceLanguageConfig methods to let the Speech service know the source language and provide a custom model target.
 services: cognitive-services
 author: susanhu
 manager: nitinme
@@ -15,118 +15,117 @@ ms.devlang: cpp, csharp, java, javascript, objective-c, python
 ms.custom: "devx-track-js, devx-track-csharp"
 ---

-# Specify source language for speech to text
+# Specify source language for speech-to-text

-In this article, you'll learn how to specify the source language for an audio input passed to the Speech SDK for speech recognition. Additionally, example code is provided to specify a custom speech model for improved recognition.
+In this article, you'll learn how to specify the source language for an audio input passed to the Speech SDK for speech recognition. The example code that's provided specifies a custom speech model for improved recognition.

 ::: zone pivot="programming-language-csharp"

-## How to specify source language in C#
+## Specify source language in C#

-In the following example, the source language is provided explicitly as a parameter using `SpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SpeechRecognizer` construct:

 ```csharp
 var recognizer = new SpeechRecognizer(speechConfig, "de-DE", audioConfig);
 ```

-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.

 ```csharp
 var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE");
 var recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
 ```

-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.

 ```csharp
 var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE", "The Endpoint ID for your custom model.");
 var recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
 ```

 >[!Note]
-> `SpeechRecognitionLanguage` and `EndpointId` set methods are deprecated from the `SpeechConfig` class in C#. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> The `SpeechRecognitionLanguage` and `EndpointId` set methods are deprecated from the `SpeechConfig` class in C#. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.

 ::: zone-end

 ::: zone pivot="programming-language-cpp"

+## Specify source language in C++

-## How to specify source language in C++
-
-In the following example, the source language is provided explicitly as a parameter using the `FromConfig` method.
+In the following example, the source language is provided explicitly as a parameter by using the `FromConfig` method.

 ```C++
 auto recognizer = SpeechRecognizer::FromConfig(speechConfig, "de-DE", audioConfig);
 ```

-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter to `FromConfig` when creating the `recognizer`.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the `recognizer` construct.

 ```C++
 auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE");
 auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig);
 ```

-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. The `sourceLanguageConfig` is passed as a parameter to `FromConfig` when creating the `recognizer`.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the `recognizer` construct.

 ```C++
 auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE", "The Endpoint ID for your custom model.");
 auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig);
 ```

 >[!Note]
-> `SetSpeechRecognitionLanguage` and `SetEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> `SetSpeechRecognitionLanguage` and `SetEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.

 ::: zone-end

 ::: zone pivot="programming-language-java"

-## How to specify source language in Java
+## Specify source language in Java

-In the following example, the source language is provided explicitly when creating a new `SpeechRecognizer`.
+In the following example, the source language is provided explicitly when you create a new `SpeechRecognizer` construct.

 ```Java
 SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, "de-DE", audioConfig);
 ```

-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter when creating a new `SpeechRecognizer`.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer` construct.

 ```Java
 SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE");
 SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
 ```

-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `sourceLanguageConfig` is passed as a parameter when creating a new `SpeechRecognizer`.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer` construct.

 ```Java
 SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE", "The Endpoint ID for your custom model.");
 SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, sourceLanguageConfig, audioConfig);
 ```

 >[!Note]
-> `setSpeechRecognitionLanguage` and `setEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods are discouraged, and shouldn't be used when constructing a `SpeechRecognizer`.
+> `setSpeechRecognitionLanguage` and `setEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer` construct.

 ::: zone-end

 ::: zone pivot="programming-language-python"

-## How to specify source language in Python
+## Specify source language in Python

-In the following example, the source language is provided explicitly as a parameter using `SpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SpeechRecognizer` construct.

 ```Python
 speech_recognizer = speechsdk.SpeechRecognizer(
         speech_config=speech_config, language="de-DE", audio_config=audio_config)
 ```

-In the following example, the source language is provided using `SourceLanguageConfig`. Then, the `SourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `SourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.

 ```Python
 source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE")
 speech_recognizer = speechsdk.SpeechRecognizer(
         speech_config=speech_config, source_language_config=source_language_config, audio_config=audio_config)
 ```

-In the following example, the source language and custom endpoint are provided using `SourceLanguageConfig`. Then, the `SourceLanguageConfig` is passed as a parameter to `SpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `SourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` construct.

 ```Python
 source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE", "The Endpoint ID for your custom model.")
@@ -135,15 +134,15 @@ speech_recognizer = speechsdk.SpeechRecognizer(
 ```

 >[!Note]
-> `speech_recognition_language` and `endpoint_id` properties are deprecated from the `SpeechConfig` class in Python. The use of these properties is discouraged, and they shouldn't be used when constructing a `SpeechRecognizer`.
+> The `speech_recognition_language` and `endpoint_id` properties are deprecated from the `SpeechConfig` class in Python. The use of these properties is discouraged. Don't use them when you create a `SpeechRecognizer` construct.

 ::: zone-end

 ::: zone pivot="programming-language-more"

-## How to specify source language in Javascript
+## Specify source language in JavaScript

-The first step is to create a `SpeechConfig`:
+The first step is to create a `SpeechConfig` construct:

 ```Javascript
 var speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionkey", "YourRegion");
@@ -161,16 +160,16 @@ If you're using a custom model for recognition, you can specify the endpoint wit
 speechConfig.endpointId = "The Endpoint ID for your custom model.";
 ```

-## How to specify source language in Objective-C
+## Specify source language in Objective-C

-In the following example, the source language is provided explicitly as a parameter using `SPXSpeechRecognizer` construct.
+In the following example, the source language is provided explicitly as a parameter by using the `SPXSpeechRecognizer` construct.

 ```Objective-C
 SPXSpeechRecognizer* speechRecognizer = \
         [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig language:@"de-DE" audioConfiguration:audioConfig];
 ```

-In the following example, the source language is provided using `SPXSourceLanguageConfiguration`. Then, the `SPXSourceLanguageConfiguration` is passed as a parameter to `SPXSpeechRecognizer` construct.
+In the following example, the source language is provided by using `SPXSourceLanguageConfiguration`. Then, `SPXSourceLanguageConfiguration` is passed as a parameter to the `SPXSpeechRecognizer` construct.

 ```Objective-C
 SPXSourceLanguageConfiguration* sourceLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"de-DE"];
@@ -179,7 +178,7 @@ SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpe
         audioConfiguration:audioConfig];
 ```

-In the following example, the source language and custom endpoint are provided using `SPXSourceLanguageConfiguration`. Then, the `SPXSourceLanguageConfiguration` is passed as a parameter to `SPXSpeechRecognizer` construct.
+In the following example, the source language and custom endpoint are provided by using `SPXSourceLanguageConfiguration`. Then, `SPXSourceLanguageConfiguration` is passed as a parameter to the `SPXSpeechRecognizer` construct.

 ```Objective-C
 SPXSourceLanguageConfiguration* sourceLanguageConfig = \
@@ -191,14 +190,14 @@ SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpe
 ```

 >[!Note]
-> `speechRecognitionLanguage` and `endpointId` properties are deprecated from the `SPXSpeechConfiguration` class in Objective-C. The use of these properties is discouraged, and they shouldn't be used when constructing a `SPXSpeechRecognizer`.
+> The `speechRecognitionLanguage` and `endpointId` properties are deprecated from the `SPXSpeechConfiguration` class in Objective-C. The use of these properties is discouraged. Don't use them when you create a `SPXSpeechRecognizer` construct.

 ::: zone-end

 ## See also

-* For a list of supported languages and locales for speech to text, see [Language support](language-support.md).
+For a list of supported languages and locales for speech-to-text, see [Language support](language-support.md).

 ## Next steps

-* [Speech SDK reference documentation](speech-sdk.md)
+See the [Speech SDK reference documentation](speech-sdk.md).
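As an editorial aside (not part of the commit): every snippet in this second article passes a locale code such as `de-DE`. Assuming only the simple language-REGION shape these examples use (not the full BCP-47 grammar, so tags like `zh-Hans-CN` are out of scope), a quick sanity check one might run before constructing a `SourceLanguageConfig`:

```python
import re

# Rough language-REGION shape: 2-3 lowercase letters, hyphen, 2 uppercase letters.
LOCALE_RE = re.compile(r"[a-z]{2,3}-[A-Z]{2}")

def looks_like_locale(code: str) -> bool:
    """Rough check for codes like 'de-DE' or 'en-US'; not full BCP-47."""
    return bool(LOCALE_RE.fullmatch(code))

for code in ("de-DE", "en-US", "german", "DE-de"):
    print(code, looks_like_locale(code))
```

Catching a malformed code up front gives a clearer error than passing it through to the recognizer.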
