articles/cognitive-services/Speech-Service/how-to-devices-microphone-array-configuration.md

---
title: How to configure a Microphone Array - Speech Service
titleSuffix: Azure Cognitive Services
description: Learn how to configure a Microphone Array so that the Speech Devices SDK can use it.
services: cognitive-services
author: mswellsi
manager: yanbo
ms.author: wellsi
---

# How to configure a Microphone Array

In this article, you learn how to configure a [microphone array](https://aka.ms/sdsdk-microphone), including how to set the working angle and how to select which microphone the Speech Devices SDK uses.

The Speech Devices SDK works best with a microphone array that has been designed according to [our guidelines](https://aka.ms/sdsdk-microphone). The microphone array configuration can be provided by the operating system or supplied through one of the following methods.

The Speech Devices SDK initially supported microphone arrays by selecting from a fixed set of configurations.

On Windows, the microphone array configuration is supplied by the audio driver.

From v1.11.0, the Speech Devices SDK also supports configuration from a [JSON file](https://aka.ms/sdsdk-micarray-json).
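
The linked page defines the actual JSON schema. The fragment below is only an illustrative sketch, with hypothetical field names, of the kind of information such a file carries: the array type, per-microphone coordinates, and a beamforming range.

```json
{
  "micArray": {
    "type": "Planar",
    "geometry": {
      "microphoneCoordinates": [
        { "x": 0.0,    "y": 0.0, "z": 0.0 },
        { "x": 0.0425, "y": 0.0, "z": 0.0 }
      ]
    },
    "beamformingRange": { "startAngle": 0, "endAngle": 180 }
  }
}
```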
## Windows

On Windows, the microphone array geometry information is automatically obtained from the audio driver, so the properties `DeviceGeometry`, `SelectedGeometry`, and `MicArrayGeometryConfigFile` are optional. The [JSON file](https://aka.ms/sdsdk-micarray-json) provided through `MicArrayGeometryConfigFile` is used only to get the beamforming range.

If a microphone array is specified using `AudioConfig::FromMicrophoneInput`, then we use the specified microphone. If a microphone is not specified or `AudioConfig::FromDefaultMicrophoneInput` is called, then we use the default microphone, which is specified in Sound settings on Windows.
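
As a sketch, microphone selection with the Speech SDK C++ API looks like the following. The subscription key, region, and endpoint ID strings are placeholders, not real values.

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;

int main() {
    // Placeholder credentials: substitute your own key and region.
    auto speechConfig = SpeechConfig::FromSubscription("YourKey", "YourRegion");

    // Use the default microphone chosen in Windows Sound settings.
    auto audioInput = AudioConfig::FromDefaultMicrophoneInput();

    // Or name a specific capture endpoint; the ID below is a placeholder.
    // Enumerate real endpoint IDs with the Windows Core Audio APIs.
    // auto audioInput = AudioConfig::FromMicrophoneInput(
    //     "{0.0.1.00000000}.{your-endpoint-id}");

    auto recognizer = SpeechRecognizer::FromConfig(speechConfig, audioInput);
    return 0;
}
```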

The Microsoft Audio Stack in the Speech Devices SDK only supports downsampling for sample rates that are integral multiples of 16 kHz.
## Linux

On Linux, the microphone geometry information must be provided. The use of `DeviceGeometry` and `SelectedGeometry` remains supported, and the geometry can also be provided via a JSON file using the `MicArrayGeometryConfigFile` property. As on Windows, the beamforming range can be provided by the JSON file.
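
A minimal sketch of supplying the geometry through properties, assuming the C++ Speech SDK; the key, region, file name, and geometry values shown are example placeholders.

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

int main() {
    // Placeholder credentials: substitute your own key and region.
    auto config = SpeechConfig::FromSubscription("YourKey", "YourRegion");

    // Fixed-set configuration: the physical array geometry and the subset
    // of microphones to use (example values).
    config->SetProperty("DeviceGeometry", "Circular6+1");
    config->SetProperty("SelectedGeometry", "Circular3+1");

    // Or, from v1.11.0, point at a JSON geometry description instead.
    // The file name is a placeholder.
    config->SetProperty("MicArrayGeometryConfigFile", "mic-array-geometry.json");
    return 0;
}
```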

If a microphone array is specified using `AudioConfig::FromMicrophoneInput`, then we use the specified microphone. If a microphone is not specified or `AudioConfig::FromDefaultMicrophoneInput` is called, then we record from the ALSA device named *default*. By default, *default* points to card 0, device 0, but users can change it in the `asound.conf` file.
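
For example, assuming the microphone array enumerates as card 1, an `asound.conf` entry like the following remaps *default* away from card 0 (a sketch using standard ALSA configuration syntax):

```
# /etc/asound.conf (or ~/.asoundrc): point the ALSA "default" PCM and
# control devices at card 1, device 0, instead of card 0, device 0.
pcm.!default {
    type hw
    card 1
    device 0
}
ctl.!default {
    type hw
    card 1
}
```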

The Microsoft Audio Stack in the Speech Devices SDK only supports downsampling for sample rates that are integral multiples of 16 kHz. Additionally, the following formats are supported: 32-bit IEEE little-endian float, 32-bit little-endian signed int, 24-bit little-endian signed int, 16-bit little-endian signed int, and 8-bit signed int.
## Android

Currently, only [Roobo v1](speech-devices-sdk-android-quickstart.md) is supported by the Speech Devices SDK. The behavior is the same as in previous releases, except that the `MicArrayGeometryConfigFile` property can now be used to specify a JSON file containing the beamforming range.