articles/cognitive-services/Speech-Service/how-to-select-audio-input-devices.md
---
title: Select an audio input device with the Speech SDK
titleSuffix: Azure Cognitive Services
description: 'Learn about selecting audio input devices in the Speech SDK (C++, C#, Python, Objective-C, Java, and JavaScript) by obtaining the IDs of the audio devices connected to a system.'
---

# Select an audio input device with the Speech SDK

Version 1.3.0 of the Speech SDK introduces an API to select the audio input. This article describes how to obtain the IDs of the audio devices connected to a system. These IDs can then be used in the Speech SDK. You configure the audio device through the `AudioConfig` object.
> Microphone use isn't available for JavaScript running in Node.js.
## Audio device IDs on Windows for desktop applications
Audio device [endpoint ID strings](/windows/desktop/CoreAudio/endpoint-id-strings) can be retrieved from the [`IMMDevice`](/windows/desktop/api/mmdeviceapi/nn-mmdeviceapi-immdevice) object in Windows for desktop applications.
The following code sample illustrates how to use `IMMDevice` to enumerate audio devices in C++:

```cpp
// ...
// Print the endpoint friendly name and endpoint ID.
printf("Endpoint %d: \"%S\" (%S)\n", i, varName.pwszVal, pwszID);

CoTaskMemFree(pwszID);
// ...
Exit:
    // ...
}
```
In C#, you can use the [NAudio](https://github.com/naudio/NAudio) library to access the CoreAudio API and enumerate devices as follows:
```cs
using System;
// ...
```

A sample device ID is `{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}`.
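The sample ID above follows a fixed shape: a dotted endpoint prefix in braces, then a GUID in braces. As an illustration only (this helper is not part of any SDK), a quick shape check for such IDs could look like this:

```javascript
// Hypothetical sanity check for the Windows capture-endpoint ID shape shown
// above: "{0.0.1.00000000}.{<GUID>}". Not part of the Speech SDK or Windows APIs.
const ENDPOINT_ID_RE =
  /^\{\d+\.\d+\.\d+\.\d+\}\.\{[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\}$/i;

function looksLikeEndpointId(id) {
  return ENDPOINT_ID_RE.test(id);
}

console.log(
  looksLikeEndpointId("{0.0.1.00000000}.{5f23ab69-6181-4f4a-81a4-45414013aac8}")
); // true
```

Such a guard can catch copy-paste mistakes before the ID is handed to `AudioConfig`.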
## Audio device IDs on UWP
On the Universal Windows Platform (UWP), you can obtain audio input devices by using the `Id()` property of the corresponding [`DeviceInformation`](/uwp/api/windows.devices.enumeration.deviceinformation) object.

The following code samples show how to do this step in C++ and C#:
```cpp
#include <winrt/Windows.Foundation.h>
// ...
```

A sample device ID is `\\\\?\\SWD#MMDEVAPI#{0.0.1.00000000}.{5f23ab69-6181-4f4a-...`.
## Audio device IDs on Linux
The device IDs are selected by using standard ALSA device IDs.
The IDs of the inputs attached to the system are contained in the output of the command `arecord -L`.
Alternatively, they can be obtained by using the [ALSA C library](https://www.alsa-project.org/alsa-doc/alsa-lib/).
Sample IDs are `hw:1,0` and `hw:CARD=CC,DEV=0`.
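To show how the `arecord -L` output relates to these IDs, here is an illustrative helper (not part of the Speech SDK or ALSA; the function name is hypothetical). It assumes `arecord`'s usual formatting, where device IDs start at column 0 and description lines are indented:

```javascript
// Extract candidate ALSA device IDs from `arecord -L` output: keep only
// non-empty lines that are not indented (indented lines are descriptions).
function parseAlsaDeviceIds(arecordOutput) {
  return arecordOutput
    .split("\n")
    .filter((line) => line.length > 0 && !/^\s/.test(line))
    .map((line) => line.trim());
}

// Example input resembling `arecord -L` output:
const sample = [
  "null",
  "    Discard all samples (playback) or generate zero samples (capture)",
  "hw:CARD=CC,DEV=0",
  "    USB Audio CODEC, USB Audio",
  "    Direct hardware device without any conversions",
].join("\n");

console.log(parseAlsaDeviceIds(sample)); // [ 'null', 'hw:CARD=CC,DEV=0' ]
```

Each returned string (for example `hw:CARD=CC,DEV=0`) is in the form the Speech SDK expects for an ALSA device ID.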
For example, the UID for the built-in microphone is `BuiltInMicrophoneDevice`.
## Audio device IDs on iOS
Audio device selection with the Speech SDK isn't supported on iOS. Apps that use the SDK can influence audio routing through the [`AVAudioSession`](https://developer.apple.com/documentation/avfoundation/avaudiosession?language=objc) framework.
For example, the instruction

enables the use of a Bluetooth headset for a speech-enabled app.
## Audio device IDs in JavaScript
In JavaScript, the [MediaDevices.enumerateDevices()](https://developer.mozilla.org/docs/Web/API/MediaDevices/enumerateDevices) method can be used to enumerate the media devices and find a device ID to pass to `fromMicrophone(...)`.
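A minimal sketch of that lookup follows. The helper name (`findMicrophoneId`) is hypothetical, and a mocked device list stands in for what the browser's `enumerateDevices()` promise would resolve to:

```javascript
// Pick the device ID of an audio input from an enumerateDevices()-style array.
// Devices of kind "audioinput" are microphones; prefer a matching label if given.
function findMicrophoneId(devices, preferredLabel) {
  const mics = devices.filter((d) => d.kind === "audioinput");
  const match = preferredLabel
    ? mics.find((d) => d.label === preferredLabel)
    : mics[0];
  return match ? match.deviceId : null;
}

// Mock of what enumerateDevices() might resolve to in a browser:
const devices = [
  { kind: "videoinput", label: "Integrated Camera", deviceId: "cam1" },
  { kind: "audioinput", label: "Headset Microphone", deviceId: "mic42" },
];

console.log(findMicrophoneId(devices, "Headset Microphone")); // "mic42"
// The resulting ID is what you would pass to fromMicrophone(...).
```

Note that labels are only populated after the user grants microphone permission, so falling back to the first `audioinput` entry is a common default.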
## Next steps
> [!div class="nextstepaction"]
> [Explore samples on GitHub](https://aka.ms/csspeech/samples)
---
title: Specify source language for speech to text
titleSuffix: Azure Cognitive Services
description: The Speech SDK allows you to specify the source language when you convert speech to text. This article describes how to use the FromConfig and SourceLanguageConfig methods to let the Speech service know the source language and provide a custom model target.
---
In this article, you'll learn how to specify the source language for an audio input passed to the Speech SDK for speech recognition. The example code that's provided specifies a custom speech model for improved recognition.
::: zone pivot="programming-language-csharp"
## Specify source language in C#

In the following example, the source language is provided explicitly as a parameter to the `SpeechRecognizer` constructor.
In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` constructor.
In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to the `SpeechRecognizer` constructor:
```csharp
var sourceLanguageConfig = SourceLanguageConfig.FromLanguage("de-DE", "The Endpoint ID for your custom model.");
// ...
```

> [!Note]
> The `SpeechRecognitionLanguage` and `EndpointId` set methods are deprecated from the `SpeechConfig` class in C#. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer`.
::: zone-end
::: zone pivot="programming-language-cpp"
## Specify source language in C++

In the following example, the source language is provided explicitly as a parameter by using the `FromConfig` method:
```C++
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, "de-DE", audioConfig);
```

In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the recognizer:
```C++
auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE");
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig);
```

In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter to `FromConfig` when you create the recognizer:
```C++
auto sourceLanguageConfig = SourceLanguageConfig::FromLanguage("de-DE", "The Endpoint ID for your custom model.");
auto recognizer = SpeechRecognizer::FromConfig(speechConfig, sourceLanguageConfig, audioConfig);
```

> [!Note]
> `SetSpeechRecognitionLanguage` and `SetEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer`.
::: zone-end
::: zone pivot="programming-language-java"
## Specify source language in Java

In the following example, the source language is provided explicitly when you create a new `SpeechRecognizer`.
In the following example, the source language is provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer`.
In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, `sourceLanguageConfig` is passed as a parameter when you create a new `SpeechRecognizer`:
```Java
SourceLanguageConfig sourceLanguageConfig = SourceLanguageConfig.fromLanguage("de-DE", "The Endpoint ID for your custom model.");
// ...
```

> [!Note]
> `setSpeechRecognitionLanguage` and `setEndpointId` are deprecated methods from the `SpeechConfig` class in C++ and Java. The use of these methods is discouraged. Don't use them when you create a `SpeechRecognizer`.
::: zone-end
::: zone pivot="programming-language-python"
## Specify source language in Python

In the following example, the source language is provided explicitly as a parameter to the `SpeechRecognizer` constructor.
In the following example, the source language is provided by using `SourceLanguageConfig`. Then, the `SourceLanguageConfig` object is passed as a parameter to the `SpeechRecognizer` constructor.
In the following example, the source language and custom endpoint are provided by using `SourceLanguageConfig`. Then, the `SourceLanguageConfig` object is passed as a parameter to the `SpeechRecognizer` constructor:
```Python
source_language_config = speechsdk.languageconfig.SourceLanguageConfig("de-DE", "The Endpoint ID for your custom model.")
# ...
```

> [!Note]
> The `speech_recognition_language` and `endpoint_id` properties are deprecated from the `SpeechConfig` class in Python. The use of these properties is discouraged. Don't use them when you create a `SpeechRecognizer`.
::: zone-end
::: zone pivot="programming-language-more"
## Specify source language in JavaScript

The first step is to create a `SpeechConfig` instance:
```Javascript
var speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionkey", "YourRegion");
```

If you're using a custom model for recognition, you can specify the endpoint with `endpointId`:

```Javascript
speechConfig.endpointId = "The Endpoint ID for your custom model.";
```
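As an optional, illustrative guard (not part of the Speech SDK; the function name is hypothetical), you could verify that a source-language string matches the `ll-CC` shape these examples use, such as `de-DE`, before assigning it:

```javascript
// Check that a tag looks like a two- or three-letter language code followed
// by a two-letter uppercase country code (e.g. "de-DE", "en-US").
function isLanguageTag(tag) {
  return /^[a-z]{2,3}-[A-Z]{2}$/.test(tag);
}

console.log(isLanguageTag("de-DE")); // true
console.log(isLanguageTag("german")); // false
```

This catches obvious typos locally rather than surfacing them as recognition errors from the service.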
## Specify source language in Objective-C

In the following example, the source language is provided explicitly as a parameter when you create an `SPXSpeechRecognizer`.
In the following example, the source language is provided by using `SPXSourceLanguageConfiguration`. Then, the `SPXSourceLanguageConfiguration` object is passed as a parameter when you create an `SPXSpeechRecognizer`.
In the following example, the source language and custom endpoint are provided by using `SPXSourceLanguageConfiguration`. Then, the `SPXSourceLanguageConfiguration` object is passed as a parameter when you create an `SPXSpeechRecognizer`.
> [!Note]
> The `speechRecognitionLanguage` and `endpointId` properties are deprecated from the `SPXSpeechConfiguration` class in Objective-C. The use of these properties is discouraged. Don't use them when you create an `SPXSpeechRecognizer`.
::: zone-end
## See also
For a list of supported languages and locales for speech-to-text, see [Language support](language-support.md).