
Commit 3317ca0

Merge pull request #261202 from eric-urban/eur/audio-update
audio update
2 parents 771970b + 8f1dae6 commit 3317ca0

1 file changed (+6 −5 lines):

articles/ai-services/speech-service/concepts/audio-concepts.md
Speech is inherently analog. It's approximated as a digital signal by sampling: the number of samples taken per second is the sampling rate, and the accuracy of each sample is defined by the bit-depth.

### Sample rate

The sample rate is the number of audio samples per second. A higher sampling rate more accurately reproduces higher frequencies, such as in music. Humans can typically hear between 20 Hz and 20 kHz, but are most sensitive up to about 5 kHz. The sample rate needs to be at least twice the highest frequency present, so for human speech a 16 kHz sampling rate is normally adequate; a higher sampling rate provides higher quality at the cost of larger files. The default for both speech to text and text to speech is 16 kHz, but 48 kHz is recommended for audio books. Some source audio is 8 kHz, especially audio from legacy telecom systems, which results in degraded quality.
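
The twice-the-highest-frequency rule above (the Nyquist criterion) can be sketched as a quick calculation. The 8 kHz speech ceiling used here is an illustrative assumption, not a service parameter:

```python
def min_sample_rate(highest_frequency_hz: int) -> int:
    """Nyquist criterion: sample at least twice the highest frequency."""
    return 2 * highest_frequency_hz

# Speech content up to ~8 kHz is covered by a 16 kHz sample rate.
print(min_sample_rate(8_000))   # 16000
# The full human hearing range (20 kHz) needs at least 40 kHz.
print(min_sample_rate(20_000))  # 40000
```
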
### Bit-depth
Uncompressed audio samples are each represented by a number of bits that defines their accuracy, or resolution. Human speech needs 13 bits, which is rounded up to a 16-bit sample. A higher bit-depth is needed for professional audio or music. Legacy telephony systems often use 8 bits with compression, which isn't ideal.
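
Sample rate and bit-depth together determine the raw data rate of uncompressed audio. A minimal sketch (the helper name is illustrative):

```python
def pcm_bytes_per_second(sample_rate_hz: int, bit_depth: int, channels: int = 1) -> int:
    """Uncompressed PCM data rate: samples/second x bytes per sample x channels."""
    return sample_rate_hz * (bit_depth // 8) * channels

# 16 kHz, 16-bit mono: 32,000 bytes/s, roughly 1.9 MB per minute of audio.
print(pcm_bytes_per_second(16_000, 16))  # 32000
```
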
### Channels
The Speech service typically expects and provides a mono stream. The behavior of stereo and multichannel files is API specific; for example, the speech to text REST API splits a stereo file and generates a result for each channel. Text to speech is mono only.
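
Per-channel processing can be pictured by deinterleaving raw stereo PCM into two mono streams. This is an illustrative sketch of the concept, not Speech service code:

```python
import struct

def split_stereo_pcm(interleaved: bytes, sample_width: int = 2) -> tuple[bytes, bytes]:
    """Split interleaved stereo PCM (L R L R ...) into two mono byte streams."""
    frame = sample_width * 2
    left = b"".join(interleaved[i:i + sample_width] for i in range(0, len(interleaved), frame))
    right = b"".join(interleaved[i + sample_width:i + frame] for i in range(0, len(interleaved), frame))
    return left, right

# Two 16-bit frames: (L=1, R=2), (L=3, R=4)
stereo = struct.pack("<4h", 1, 2, 3, 4)
left, right = split_stereo_pcm(stereo)
print(struct.unpack("<2h", left), struct.unpack("<2h", right))  # (1, 3) (2, 4)
```
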
## Audio formats and codecs
For the Speech service to use audio, it needs to know how the audio is encoded. Because audio files can be relatively large, compression is commonly used to reduce their size. Audio files and streams can be described by their container format and their audio codec. Common containers are WAV and MP4, and common audio formats are PCM and MP3. You normally can't presume that a container uses a specific audio format; for instance, WAV files often contain PCM data, but other audio formats are possible.
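
The container-versus-codec distinction is visible in a WAV file's header: the RIFF container carries a `fmt ` chunk whose format code names the codec (1 means PCM). A small sketch using Python's standard-library `wave` module:

```python
import io
import struct
import wave

# Write a tiny 16 kHz, 16-bit mono PCM WAV into memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(16_000)
    w.writeframes(b"\x00\x00" * 16)

data = buf.getvalue()
# The RIFF container identifies itself, then the fmt chunk declares the codec.
assert data[:4] == b"RIFF" and data[8:12] == b"WAVE"
audio_format, channels, sample_rate = struct.unpack_from("<HHI", data, 20)
print(audio_format, channels, sample_rate)  # 1 1 16000  (format code 1 = PCM)
```
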
### Uncompressed audio
The Speech service internally works on uncompressed audio, which is encoded with pulse-code modulation (PCM). In PCM, every sample represents the amplitude of the signal at an instant in time. This representation is simple to process but not space efficient, so compression is often used when transporting audio.
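
As a sketch of what PCM samples are, the snippet below samples a sine wave and quantizes each amplitude to a signed 16-bit integer. The tone and values are illustrative:

```python
import math

def pcm16_samples(frequency_hz: float, sample_rate_hz: int, n_samples: int) -> list[int]:
    """Sample a sine wave and quantize each amplitude to a signed 16-bit integer."""
    return [
        round(32767 * math.sin(2 * math.pi * frequency_hz * i / sample_rate_hz))
        for i in range(n_samples)
    ]

# A 440 Hz tone at 16 kHz: each value is the signal's amplitude at that instant.
samples = pcm16_samples(440, 16_000, 4)
print(samples[0])  # 0 (a sine wave starts at zero amplitude)
```
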
Lossy algorithms can achieve greater compression, resulting in smaller files or lower bandwidth, which can be important on mobile connections or busy networks. A common lossy format is MP3. MP3 files are significantly smaller than the originals and might sound nearly identical, but you can't recreate the exact source file, because lossy compression works by removing or approximating parts of the audio. When you encode with a lossy algorithm, you trade accuracy for bandwidth.
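
The bandwidth trade-off can be made concrete with simple arithmetic. The 64 kbps MP3 bitrate below is an assumed, typical speech-quality setting for illustration, not a service default:

```python
# Uncompressed 16 kHz, 16-bit mono PCM versus an assumed lossy bitrate.
pcm_kbps = 16_000 * 16 // 1000   # 256 kbps of raw PCM
mp3_kbps = 64                    # illustrative speech-quality MP3 bitrate
print(pcm_kbps // mp3_kbps)      # 4 -> the lossy stream is 4x smaller
```
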
MP3 was designed for music rather than speech.
AMR and AMR-WB were designed to efficiently compress speech for mobile phones, and don't represent music or noise as well.
A-Law and Mu-Law are older algorithms that compress each sample independently, converting a 16-bit sample to 8 bits by using a logarithmic quantization technique. They should only be used to support legacy systems.
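
The logarithmic quantization idea behind Mu-Law can be sketched with the continuous companding formula (μ = 255). This simplified form illustrates the principle only; real G.711 codecs use a segmented 8-bit encoding instead:

```python
import math

MU = 255  # Mu-Law parameter used in North American and Japanese telephony

def mulaw_compress(x: float) -> int:
    """Map a sample in [-1, 1] to an 8-bit code via logarithmic quantization."""
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    return round((y + 1) / 2 * 255)  # scale [-1, 1] onto 0..255

def mulaw_expand(code: int) -> float:
    """Invert the companding; quiet samples keep more precision than loud ones."""
    y = code / 255 * 2 - 1
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A quiet sample survives the 8-bit round trip with small error.
print(round(mulaw_expand(mulaw_compress(0.01)), 4))  # 0.0102
```
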
