
Commit b15cc81

committed
hd voices
1 parent 024e54f commit b15cc81

File tree

5 files changed (+105, -5)

articles/ai-services/speech-service/high-definition-voices.md
Lines changed: 86 additions & 0 deletions
@@ -0,0 +1,86 @@
---
title: What are neural text to speech HD voices?
titleSuffix: Azure AI services
description: Learn about neural text to speech HD voices that you can use with speech synthesis.
author: eric-urban
ms.author: eur
ms.reviewer: v-baolianzou
manager: nitinme
ms.service: azure-ai-speech
ms.topic: overview
ms.date: 10/9/2024
ms.custom: references_regions
#customer intent: As a user who implements text to speech, I want to understand the options and differences between available neural text to speech HD voices in Azure AI Speech.
---

# What are high definition voices? (Preview)

Azure AI Speech continues to advance text to speech technology with the introduction of neural text to speech high definition (HD) voices. The HD voices can understand the content, automatically detect emotions in the input text, and adjust the speaking tone in real time to match the sentiment. HD voices maintain a consistent voice persona from their neural (and non HD) counterparts, and deliver even more value through enhanced features.

## Key features of neural text to speech HD voices

The following are the key features of Azure AI Speech HD voices:

| Key feature | Description |
|-------------|-------------|
| **Human-like speech generation** | Neural text to speech HD voices can generate highly natural and human-like speech. The model is trained on millions of hours of multilingual data, enabling it to accurately interpret input text and generate speech with the appropriate emotion, pace, and rhythm without manual adjustments. |
| **Version control** | With neural text to speech HD voices, we release different versions of the same voice, each with a unique base model size and recipe. This offers you the opportunity to experience new voice variations or continue using a specific version of a voice. |
| **High fidelity** | The primary objective of neural text to speech HD voices is to generate high-fidelity audio. The synthetic speech produced by our system can closely mimic human speech in both quality and naturalness. |

## Comparison of Azure AI Speech HD voices to other Azure text to speech voices

How do Azure AI Speech HD voices compare to other Azure text to speech voices, and how do they differ in features and capabilities?

Here's a comparison of features between Azure AI Speech HD voices, Azure OpenAI HD voices, and Azure AI Speech voices:

| Feature | Azure AI Speech HD voices | Azure OpenAI HD voices | Azure AI Speech voices (not HD) |
|---------|---------------------------|------------------------|---------------------------------|
| **Region** | North Central US, Sweden Central | North Central US, Sweden Central | Available in dozens of regions. See the [region list](regions.md#speech-service). |
| **Number of voices** | 12 | 6 | More than 500 |
| **Multilingual** | No (performs in the primary language only) | Yes | Yes (applicable only to multilingual voices) |
| **SSML support** | Support for [a subset of SSML elements](#supported-and-unsupported-ssml-elements-for-azure-neural-text-to-speech-hd-voices). | Support for [a subset of SSML elements](openai-voices.md#ssml-elements-supported-by-openai-text-to-speech-voices-in-azure-ai-speech). | Support for the [full set of SSML](speech-synthesis-markup-structure.md) in Azure AI Speech. |
| **Development options** | Speech SDK, Speech CLI, REST API | Speech SDK, Speech CLI, REST API | Speech SDK, Speech CLI, REST API |
| **Deployment options** | Cloud only | Cloud only | Cloud, embedded, hybrid, and containers |
| **Real-time or batch synthesis** | Real-time only | Real-time and batch synthesis | Real-time and batch synthesis |
| **Latency** | Less than 300 ms | Greater than 500 ms | Less than 300 ms |
| **Sample rate of synthesized audio** | 8, 16, 22.05, 24, 44.1, and 48 kHz | 8, 16, 24, and 48 kHz | 8, 16, 22.05, 24, 44.1, and 48 kHz |
| **Speech output audio format** | opus, mp3, pcm, truesilk | opus, mp3, pcm, truesilk | opus, mp3, pcm, truesilk |
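The table above lists the REST API among the development options. As a rough illustration of how the region and output-format rows surface in practice, here's a minimal Python sketch that assembles, but doesn't send, a synthesis request. The endpoint shape, header names, and output-format string follow the Speech service REST API for speech synthesis; the voice name is illustrative rather than taken from a published list.

```python
# Sketch: assemble (but don't send) a text to speech REST request.
# Endpoint and headers follow the Speech service synthesis REST API;
# the HD voice name below is illustrative, not from a published list.
REGION = "northcentralus"  # HD voices: North Central US or Sweden Central

def build_tts_request(region: str, voice: str, text: str, key: str) -> dict:
    """Return the URL, headers, and SSML body for a synthesis request."""
    ssml = (
        f"<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        f"xml:lang='en-US'><voice name='{voice}'>{text}</voice></speak>"
    )
    return {
        "url": f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1",
        "headers": {
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/ssml+xml",
            "X-Microsoft-OutputFormat": "audio-24khz-48kbitrate-mono-mp3",
        },
        "body": ssml,
    }

request = build_tts_request(REGION, "en-US-ExampleHDVoiceNeural", "Hello!", "<your-key>")
print(request["url"])
```

You'd then POST `body` to `url` with those headers using any HTTP client; the response body is the synthesized audio in the requested output format.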

## Supported and unsupported SSML elements for Azure neural text to speech HD voices

The Speech Synthesis Markup Language (SSML) with input text determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags such as bookmark or viseme that your application processes later.

The Azure AI Speech HD voices don't support all SSML elements or events that other Azure AI Speech voices support. Of particular note, Azure AI Speech HD voices don't support [word boundary events](./how-to-speech-synthesis.md#subscribe-to-synthesizer-events).

For detailed information on the supported and unsupported SSML elements for Azure AI Speech HD voices, refer to the following table. For instructions on how to use SSML elements, refer to the [Speech Synthesis Markup Language (SSML) documentation](speech-synthesis-markup-structure.md).
| SSML element | Description | Supported in Azure AI Speech HD voices |
|--------------|-------------|----------------------------------------|
| `<voice>` | Specifies the voice and optional effects (`eq_car` and `eq_telecomhp8k`). | Yes |
| `<mstts:express-as>` | Specifies speaking styles and roles. | No |
| `<mstts:ttsembedding>` | Specifies the `speakerProfileId` property for a personal voice. | No |
| `<lang xml:lang>` | Specifies the speaking language. | Yes |
| `<prosody>` | Adjusts pitch, contour, range, rate, and volume. | No |
| `<emphasis>` | Adds or removes word-level stress for the text. | No |
| `<audio>` | Embeds prerecorded audio into an SSML document. | No |
| `<mstts:audioduration>` | Specifies the duration of the output audio. | No |
| `<mstts:backgroundaudio>` | Adds background audio to your SSML documents or mixes an audio file with text to speech. | No |
| `<phoneme>` | Specifies phonetic pronunciation in SSML documents. | No |
| `<lexicon>` | Defines how multiple entities are read in SSML. | Yes (only supports alias) |
| `<say-as>` | Indicates the content type, such as number or date, of the element's text. | Yes |
| `<sub>` | Indicates that the alias attribute's text value should be pronounced instead of the element's enclosed text. | Yes |
| `<math>` | Uses MathML as input text to properly pronounce mathematical notations in the output audio. | No |
| `<bookmark>` | Gets the offset of each marker in the audio stream. | No |
| `<break>` | Overrides the default behavior of breaks or pauses between words. | No |
| `<mstts:silence>` | Inserts pauses before or after text, or between two adjacent sentences. | No |
| `<mstts:viseme>` | Defines the position of the face and mouth while a person is speaking. | No |
| `<p>` | Denotes paragraphs in SSML documents. | Yes |
| `<s>` | Denotes sentences in SSML documents. | Yes |
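As a quick sanity check on the table above, the elements marked **Yes** can be combined in a single SSML document. Here's a small Python sketch, with an illustrative (not published) voice name, that assembles a document using only `<voice>`, `<p>`, `<s>`, `<say-as>`, and `<sub>`, then verifies it's well-formed XML:

```python
import xml.etree.ElementTree as ET

# Assemble an SSML document using only elements the table marks as supported
# for HD voices. The voice name is illustrative, not from a published list.
ssml = """<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis'
    xml:lang='en-US'>
  <voice name='en-US-ExampleHDVoiceNeural'>
    <p>
      <s>Your total is <say-as interpret-as='cardinal'>42</say-as> dollars.</s>
      <s>Signed, <sub alias='Doctor'>Dr.</sub> Smith.</s>
    </p>
  </voice>
</speak>"""

# Parsing confirms the document is well-formed XML before you send it.
root = ET.fromstring(ssml)
print(root.tag)  # the root element is <speak> in the SSML namespace
```

Elements marked **No**, such as `<prosody>` or `<break>`, would still parse as XML here; validating XML well-formedness is a necessary check, not a check of HD-voice support.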

> [!NOTE]
> Although a [previous section in this guide](#comparison-of-azure-ai-speech-hd-voices-to-other-azure-text-to-speech-voices) also compared Azure AI Speech HD voices to Azure OpenAI HD voices, the SSML elements supported by Azure AI Speech aren't applicable to Azure OpenAI voices.

## Related content

- [Try the text to speech quickstart in Azure AI Speech](get-started-text-to-speech.md)
- [Learn more about how to use SSML and events](speech-synthesis-markup-structure.md)

articles/ai-services/speech-service/includes/quickstarts/openai-speech/csharp.md

Lines changed: 2 additions & 2 deletions
@@ -228,8 +228,8 @@ PS C:\dev\openai\csharp>
 
 Here are some more considerations:
 
-- To change the speech recognition language, replace `en-US` with another [supported language](~/articles/ai-services/speech-service/language-support.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US`. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/ai-services/speech-service/language-identification.md).
-- To change the voice that you hear, replace `en-US-JennyMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices). If the voice doesn't speak the language of the text returned from Azure OpenAI, the Speech service doesn't output synthesized audio.
+- To change the speech recognition language, replace `en-US` with another [supported language](~/articles/ai-services/speech-service/language-support.md?tabs=tts). For example, `es-ES` for Spanish (Spain). The default language is `en-US`. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/ai-services/speech-service/language-identification.md).
+- To change the voice that you hear, replace `en-US-JennyMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md?tabs=tts#prebuilt-neural-voices). If the voice doesn't speak the language of the text returned from Azure OpenAI, the Speech service doesn't output synthesized audio.
 - To reduce latency for text to speech output, use the text streaming feature, which enables real-time text processing for fast audio generation and minimizes latency, enhancing the fluidity and responsiveness of real-time audio outputs. Refer to [how to use text streaming](~/articles/ai-services/speech-service/how-to-lower-speech-synthesis-latency.md#input-text-streaming).
 - To enable [TTS Avatar](~/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md) as a visual experience of speech output, refer to [real-time synthesis for text to speech avatar](~/articles/ai-services/speech-service/text-to-speech-avatar/real-time-synthesis-avatar.md) and [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar#chat-sample) for chat scenario with avatar.
 - Azure OpenAI also performs content moderation on the prompt inputs and generated outputs. The prompts or responses might be filtered if harmful content is detected. For more information, see the [content filtering](/azure/ai-services/openai/concepts/content-filter) article.

articles/ai-services/speech-service/speech-synthesis-markup-structure.md

Lines changed: 8 additions & 0 deletions
@@ -17,6 +17,14 @@ The Speech Synthesis Markup Language (SSML) with input text determines the struc
 
 Refer to the sections below for details about how to structure elements in the SSML document.
 
+> [!NOTE]
+> In addition to Azure AI Speech neural (non HD) voices, you can also use [Azure AI Speech high definition (HD) voices](high-definition-voices.md) and [Azure OpenAI neural (HD and non HD) voices](openai-voices.md). The HD voices provide higher quality for more versatile scenarios.
+>
+> Some voices don't support all [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-structure.md) tags. This includes neural text to speech HD voices, personal voices, and embedded voices.
+> - For Azure AI Speech high definition (HD) voices, check the SSML support [here](high-definition-voices.md#supported-and-unsupported-ssml-elements-for-azure-neural-text-to-speech-hd-voices).
+> - For personal voice, you can find the SSML support [here](personal-voice-how-to-use.md#supported-and-unsupported-ssml-elements-for-personal-voice).
+> - For embedded voices, check the SSML support [here](embedded-speech.md#embedded-voices-capabilities).
+
 ## Document structure
 
 The Speech service implementation of SSML is based on the World Wide Web Consortium's [Speech Synthesis Markup Language Version 1.0](https://www.w3.org/TR/2004/REC-speech-synthesis-20040907/). The elements supported by the Speech service can differ from the W3C standard.
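The minimal document structure this section describes, a root `speak` element carrying `version`, `xmlns`, and `xml:lang` attributes, can be sketched as a small helper. `make_ssml` is a hypothetical convenience function for illustration, not part of any SDK:

```python
import xml.etree.ElementTree as ET

def make_ssml(voice: str, text: str, lang: str = "en-US") -> str:
    """Wrap plain text in the minimal SSML document structure:
    a root <speak> element with version, xmlns, and xml:lang."""
    return (
        f"<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' "
        f"xml:lang='{lang}'>"
        f"<voice name='{voice}'>{text}</voice>"
        f"</speak>"
    )

# Parsing raises xml.etree.ElementTree.ParseError if the document is malformed.
doc = make_ssml("en-US-AvaNeural", "Hello world")
ET.fromstring(doc)
```

Validating well-formedness locally like this catches structural mistakes (unclosed tags, bad nesting) before the document is submitted for synthesis.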

articles/ai-services/speech-service/text-to-speech.md

Lines changed: 6 additions & 3 deletions
@@ -44,7 +44,7 @@ Here's more information about neural text to speech features in the Speech servi
 - Convert digital texts such as e-books into audiobooks.
 - Enhance in-car navigation systems.
 
-For a full list of platform neural voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
+For a full list of prebuilt Azure AI Speech neural voices, see [Language and voice support for the Speech service](language-support.md?tabs=tts).
 
 * **Improve text to speech output with SSML**: Speech Synthesis Markup Language (SSML) is an XML-based markup language used to customize text to speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, change speaking rate, adjust volume, and attribute multiple voices to a single document.
 
@@ -55,9 +55,12 @@ Here's more information about neural text to speech features in the Speech servi
 By using viseme events in Speech SDK, you can generate facial animation data. This data can be used to animate faces in lip-reading communication, education, entertainment, and customer service. Viseme is currently supported only for the `en-US` (US English) [neural voices](language-support.md?tabs=tts).
 
 > [!NOTE]
-> We plan to retire the traditional/standard voices and non-neural custom voice in 2024. After that, we'll no longer support them.
+> In addition to Azure AI Speech neural (non HD) voices, you can also use [Azure AI Speech high definition (HD) voices](high-definition-voices.md) and [Azure OpenAI neural (HD and non HD) voices](openai-voices.md). The HD voices provide higher quality for more versatile scenarios.
 >
-> If your applications, tools, or products are using any of the standard voices and custom voices, you must migrate to the neural version. For more information, see [Migrate to neural voices](migration-overview-neural-voice.md).
+> Some voices don't support all [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup-structure.md) tags. This includes neural text to speech HD voices, personal voices, and embedded voices.
+> - For Azure AI Speech high definition (HD) voices, check the SSML support [here](high-definition-voices.md#supported-and-unsupported-ssml-elements-for-azure-neural-text-to-speech-hd-voices).
+> - For personal voice, you can find the SSML support [here](personal-voice-how-to-use.md#supported-and-unsupported-ssml-elements-for-personal-voice).
+> - For embedded voices, check the SSML support [here](embedded-speech.md#embedded-voices-capabilities).
 
 ## Get started
 
articles/ai-services/speech-service/toc.yml

Lines changed: 3 additions & 0 deletions
@@ -133,6 +133,9 @@ items:
133133
- name: Get facial position with viseme
134134
href: how-to-speech-synthesis-viseme.md
135135
displayName: viseme, phoneme, phonetic
136+
- name: Use high definition (HD) voices (preview)
137+
href: high-definition-voices.md
138+
displayName: hd voice
136139
- name: Custom neural voice
137140
items:
138141
- name: Custom neural voice overview
