articles/cognitive-services/Speech-Service/overview.md
+1 −1: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ These features make up the Speech service. Use the links in this table to learn
||[Batch Transcription](batch-transcription.md)| Batch transcription enables asynchronous speech-to-text transcription of large volumes of data. This is a REST-based service, which uses the same endpoint as customization and model management. | No |[Yes](https://westus.cris.ai/swagger/ui/index)|
||[Conversation Transcription](conversation-transcription-service.md)| Enables real-time speech recognition, speaker identification, and diarization. It's perfect for transcribing in-person meetings with the ability to distinguish speakers. | Yes | No |
||[Create Custom Speech Models](#customize-your-speech-experience)| If you are using speech-to-text for recognition and transcription in a unique environment, you can create and train custom acoustic, language, and pronunciation models to address ambient noise or industry-specific vocabulary. | No |[Yes](https://westus.cris.ai/swagger/ui/index)|
- |[Text-to-Speech](text-to-speech.md)| Text-to-speech | Text-to-speech converts input text into human-like synthesized speech using [Speech Synthesis Markup Language (SSML)](text-to-speech.md#speech-synthesis-markup-language-ssml). Choose from standard voices and neural voices (see [Language support](language-support.md)). |[Yes](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk-reference)|[Yes](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-apis)|
+ |[Text-to-Speech](text-to-speech.md)| Text-to-speech | Text-to-speech converts input text into human-like synthesized speech using [Speech Synthesis Markup Language (SSML)](text-to-speech.md#core-features). Choose from standard voices and neural voices (see [Language support](language-support.md)). |[Yes](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk-reference)|[Yes](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-apis)|
||[Create Custom Voices](#customize-your-speech-experience)| Create custom voice fonts unique to your brand or product. | No |[Yes](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-apis)|
|[Speech Translation](speech-translation.md)| Speech translation | Speech translation enables real-time, multi-language translation of speech to your applications, tools, and devices. Use this service for speech-to-speech and speech-to-text translation. |[Yes](https://docs.microsoft.com/azure/cognitive-services/speech-service/speech-sdk-reference)| No |
|[Voice assistants](voice-assistants.md)| Voice assistants | Voice assistants using the Speech service empower developers to create natural, human-like conversational interfaces for their applications and experiences. The voice assistant service provides fast, reliable interaction between a device and an assistant implementation that uses the Bot Framework's Direct Line Speech channel or the integrated Custom Commands (Preview) service for task completion. |[Yes](voice-assistants.md)| No |
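As a concrete illustration of the SDK column in the Text-to-Speech row above, here is a minimal C# sketch of speech synthesis with the Speech SDK. The subscription key, region, and spoken text are placeholders; treat this as an illustrative sketch rather than a sample from the article itself.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class SynthesisSketch
{
    static async Task Main()
    {
        // Placeholder subscription key and region: substitute your own values.
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // With no audio configuration, output goes to the default speaker.
        using (var synthesizer = new SpeechSynthesizer(config))
        {
            var result = await synthesizer.SpeakTextAsync("Hello, this is text-to-speech.");
            Console.WriteLine(result.Reason == ResultReason.SynthesizingAudioCompleted
                ? "Synthesis completed."
                : $"Synthesis did not complete: {result.Reason}");
        }
    }
}
```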
articles/cognitive-services/Speech-Service/releasenotes.md
+1 −1: 1 addition & 1 deletion
@@ -152,7 +152,7 @@ This is a JavaScript-only release. No features have been added. The following fi
**New features**
- - The SDK now supports the text-to-speech service as a beta version. It is supported on Windows and Linux Desktop from C++ and C#. For more information, check the [text-to-speech overview](text-to-speech.md#get-started-with-text-to-speech).
+ - The SDK now supports the text-to-speech service as a beta version. It is supported on Windows and Linux Desktop from C++ and C#. For more information, check the [text-to-speech overview](text-to-speech.md#get-started).
- The SDK now supports MP3 and Opus/OGG audio files as stream input files. This feature is available only on Linux from C++ and C# and is currently in beta (more details [here](how-to-use-codec-compressed-audio-input-streams.md)).
- The Speech SDK for Java, .NET Core, C++, and Objective-C has gained macOS support. The Objective-C support for macOS is currently in beta.
- iOS: The Speech SDK for iOS (Objective-C) is now also published as a CocoaPod.
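To illustrate the MP3/Opus stream-input item above, here is a rough C# sketch. The file name is a placeholder; per the note above, this capability was in beta and limited to Linux from C++ and C# at the time, and the linked how-to describes the codec dependencies it needs.

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class CompressedInputSketch
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Declare the container format of the incoming stream (MP3 here; OGG/Opus is similar).
        var format = AudioStreamFormat.GetCompressedFormat(AudioStreamContainerFormat.MP3);
        var pushStream = AudioInputStream.CreatePushStream(format);

        // Feed the compressed bytes into the push stream, then close it. "sample.mp3" is a placeholder.
        pushStream.Write(File.ReadAllBytes("sample.mp3"));
        pushStream.Close();

        using (var audioInput = AudioConfig.FromStreamInput(pushStream))
        using (var recognizer = new SpeechRecognizer(config, audioInput))
        {
            var result = await recognizer.RecognizeOnceAsync();
            Console.WriteLine(result.Text);
        }
    }
}
```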
articles/cognitive-services/Speech-Service/text-to-speech.md
+30 −71: 30 additions & 71 deletions
@@ -5,47 +5,55 @@ description: The text-to-speech feature in the Speech service enables your appli
services: cognitive-services
author: erhopf
manager: nitinme
-
ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: conceptual
- ms.date: 06/24/2019
+ ms.date: 12/10/2019
ms.author: erhopf
---
# What is text-to-speech?
- Text-to-speech from the Speech service enables your applications, tools, or devices to convert text into natural human-like synthesized speech. Choose from standard and neural voices, or create your own custom voice unique to your product or brand. 75+ standard voices are available in more than 45 languages and locales, and 5 neural voices are available in 4 languages and locales. For a full list, see [supported languages](language-support.md#text-to-speech).
+ Text-to-speech from the Speech service enables your applications, tools, or devices to convert text into human-like synthesized speech. Choose from standard and neural voices, or create a custom voice unique to your product or brand. 75+ standard voices are available in more than 45 languages and locales, and 5 neural voices are available in a select number of languages and locales. For a full list of supported voices, languages, and locales, see [supported languages](language-support.md#text-to-speech).
+
+ > [!NOTE]
+ > Bing Speech was decommissioned on October 15, 2019. If your applications, tools, or products are using the Bing Speech APIs or Custom Speech, we've created guides to help you migrate to the Speech service.
+ > - [Migrate from Bing Speech to the Speech service](how-to-migrate-from-bing-speech.md)
+
+ ## Core features
- Text-to-speech technology allows content creators to interact with their users in different ways. Text-to-speech can improve accessibility by providing users with an option to interact with content audibly. Whether the user has a visual impairment, a learning disability, or requires navigation information while driving, text-to-speech can improve an existing experience. Text-to-speech is also a valuable add-on for voice bots and voice assistants.
+ * Speech synthesis - Use the [Speech SDK](quickstarts/text-to-speech-audio-file.md) or [REST API](rest-text-to-speech.md) to convert text to speech using standard, neural, or custom voices.
- By leveraging Speech Synthesis Markup Language (SSML), an XML-based markup language, developers using the text-to-speech service can specify how input text is converted into synthesized speech. With SSML, you can adjust pitch, pronunciation, speaking rate, volume, and more. For more information, see [SSML](#speech-synthesis-markup-language-ssml).
+ * Asynchronous synthesis of long audio - Use the [Long Audio API](long-audio-api.md) to asynchronously synthesize text-to-speech files longer than 10 minutes (for example, audio books or lectures). Unlike synthesis performed using the Speech SDK or speech-to-text REST API, responses aren't returned in real time. The expectation is that requests are sent asynchronously, responses are polled for, and the synthesized audio is downloaded when made available from the service. Only neural voices are supported.
- ### Standard voices
+ * Standard voices - Created using Statistical Parametric Synthesis and/or Concatenation Synthesis techniques. These voices are highly intelligible and sound natural. You can easily enable your applications to speak in more than 45 languages, with a wide range of voice options. These voices provide high pronunciation accuracy, including support for abbreviations, acronym expansions, date/time interpretations, polyphones, and more. For a full list of standard voices, see [supported languages](language-support.md#text-to-speech).
- Standard voices are created using Statistical Parametric Synthesis and/or Concatenation Synthesis techniques. These voices are highly intelligible and sound natural. You can easily enable your applications to speak in more than 45 languages, with a wide range of voice options. These voices provide high pronunciation accuracy, including support for abbreviations, acronym expansions, date/time interpretations, polyphones, and more. Use standard voice to improve accessibility for your applications and services by allowing users to interact with your content audibly.
+ * Neural voices - Deep neural networks are used to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of neural voices, see [supported languages](language-support.md#text-to-speech).
- ### Neural voices
+ * Speech Synthesis Markup Language (SSML) - An XML-based markup language used to customize text-to-speech outputs. With SSML, you can adjust pitch, add pauses, improve pronunciation, speed up or slow down speaking rate, increase or decrease volume, and attribute multiple voices to a single document. See [SSML](speech-synthesis-markup.md).
- Neural voices use deep neural networks to overcome the limits of traditional text-to-speech systems in matching the patterns of stress and intonation in spoken language, and in synthesizing the units of speech into a computer voice. Standard text-to-speech breaks down prosody into separate linguistic analysis and acoustic prediction steps that are governed by independent models, which can result in muffled voice synthesis. Our neural capability does prosody prediction and voice synthesis simultaneously, which results in a more fluid and natural-sounding voice.
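As a sketch of the speech synthesis and SSML bullets above, the following C# snippet passes an SSML document to the synthesizer. The voice name and prosody values are examples only; pick a real voice from the Language support article.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class SsmlSketch
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Example SSML: select a voice, then slow the rate and raise the pitch slightly.
        // The voice name below is a placeholder; see the Language support article for real names.
        string ssml =
            "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>" +
            "  <voice name='en-US-JessaNeural'>" +
            "    <prosody rate='-10%' pitch='+5%'>" +
            "      Neural voices can be tuned with Speech Synthesis Markup Language." +
            "    </prosody>" +
            "  </voice>" +
            "</speak>";

        using (var synthesizer = new SpeechSynthesizer(config))
        {
            var result = await synthesizer.SpeakSsmlAsync(ssml);
            Console.WriteLine($"Synthesis finished with reason: {result.Reason}");
        }
    }
}
```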
+ ## Get started
- Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems.
+ The text-to-speech service is available via the [Speech SDK](speech-sdk.md). There are several common scenarios available as quickstarts, in various languages and platforms:
- Neural voices support different styles, such as neutral and cheerful. For example, the Jessa (en-US) voice can speak cheerfully, which is optimized for warm, happy conversation. You can adjust the voice output, like tone, pitch, and speed using [Speech Synthesis Markup Language](speech-synthesis-markup.md). For a full list of available voices, see [supported languages](language-support.md#text-to-speech).
+ * [Synthesize speech into an audio file](quickstarts/text-to-speech-audio-file.md)
+ * [Synthesize speech to a speaker](quickstarts/text-to-speech.md)
- To learn more about the benefits of neural voices, see [Microsoft’s new neural text-to-speech service helps machines speak like people](https://azure.microsoft.com/blog/microsoft-s-new-neural-text-to-speech-service-helps-machines-speak-like-people/).
+ If you prefer, the text-to-speech service is accessible via [REST](rest-text-to-speech.md).
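For the REST path mentioned above, a rough C# sketch looks like the following: it exchanges the subscription key for a bearer token, then posts an SSML body to the synthesis endpoint. The region, voice name, and output format are assumptions for illustration; the linked REST reference is authoritative for headers and endpoints.

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class RestSketch
{
    static async Task Main()
    {
        string key = "YourSubscriptionKey";   // placeholder
        string region = "westus";             // placeholder region

        using (var http = new HttpClient())
        {
            // 1. Exchange the subscription key for a short-lived access token.
            var tokenRequest = new HttpRequestMessage(HttpMethod.Post,
                $"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken");
            tokenRequest.Headers.Add("Ocp-Apim-Subscription-Key", key);
            string token = await (await http.SendAsync(tokenRequest)).Content.ReadAsStringAsync();

            // 2. Post an SSML body to the synthesis endpoint and save the audio that comes back.
            var ttsRequest = new HttpRequestMessage(HttpMethod.Post,
                $"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1");
            ttsRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
            ttsRequest.Headers.Add("X-Microsoft-OutputFormat", "riff-24khz-16bit-mono-pcm");
            ttsRequest.Headers.Add("User-Agent", "tts-rest-sketch");
            string ssml = "<speak version='1.0' xml:lang='en-US'>" +
                          "<voice name='en-US-Guy24kRUS'>Hello from the REST API.</voice></speak>";
            ttsRequest.Content = new StringContent(ssml, Encoding.UTF8, "application/ssml+xml");

            var response = await http.SendAsync(ttsRequest);
            File.WriteAllBytes("output.wav", await response.Content.ReadAsByteArrayAsync());
        }
    }
}
```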
- ### Custom voices
+ ## Sample code
- Voice customization lets you create a recognizable, one-of-a-kind voice for your brand. To create your custom voice font, you make a studio recording and upload the associated scripts as the training data. The service then creates a unique voice model tuned to your recording. You can use this custom voice font to synthesize speech. For more information, see [custom voices](how-to-customize-voice-font.md).
+ Sample code for text-to-speech is available on GitHub. These samples cover text-to-speech conversion in most popular programming languages.
- Speech Synthesis Markup Language (SSML) is an XML-based markup language that lets developers specify how input text is converted into synthesized speech using the text-to-speech service. Compared to plain text, SSML allows developers to fine-tune the pitch, pronunciation, speaking rate, volume, and more of the text-to-speech output. Normal punctuation, such as pausing after a period, or using the correct intonation when a sentence ends with a question mark, is automatically handled.
+ ## Customization
- All text inputs sent to the text-to-speech service must be structured as SSML. For more information, see [Speech Synthesis Markup Language](speech-synthesis-markup.md).
+ In addition to standard and neural voices, you can create and fine-tune custom voices unique to your product or brand. All it takes to get started are a handful of audio files and the associated transcriptions. For more information, see [Get started with Custom Voice](how-to-custom-voice.md).
- ### Pricing note
+ ## Pricing note
When using the text-to-speech service, you are billed for each character that is converted to speech, including punctuation. While the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. Here's a list of what's billable:
@@ -59,61 +67,12 @@ For detailed information, see [Pricing](https://azure.microsoft.com/pricing/deta
> [!IMPORTANT]
> Each Chinese, Japanese, and Korean language character is counted as two characters for billing.
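For example, synthesizing the plain text "Hello, world!" is billed as 13 characters (letters, punctuation, and the space all count), while a 13-character Japanese sentence is billed as 26 characters under the rule above.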
- ## Core features
-
- This table lists the core features for text-to-speech:
- | Upload datasets for voice adaptation. | No | Yes\*|
- | Create and manage voice font models. | No | Yes\*|
- | Create and manage voice font deployments. | No | Yes\*|
- | Create and manage voice font tests. | No | Yes\*|
- | Manage subscriptions. | No | Yes\*|
-
- \*_These services are available using the cris.ai endpoint. See [Swagger reference](https://westus.cris.ai/swagger/ui/index). These custom voice training and management APIs implement throttling that limits requests to 25 per 5 seconds, while the speech synthesis API itself implements throttling that allows 200 requests per second as the highest. When throttling occurs, you'll be notified via message headers._
-
- ## Get started with text to speech
-
- We offer quickstarts designed to have you running code in less than 10 minutes. This table includes a list of text-to-speech quickstarts organized by language.
-
- ### SDK quickstarts
-
- | Quickstart (SDK) | Platform | API Reference |
- | ---------------- | -------- | ------------- |
- |[C#, .NET Core](~/articles/cognitive-services/Speech-Service/quickstarts/text-to-speech.md?pivots=programming-language-csharp&tabs=dotnetcore)| Windows |[Browse](https://aka.ms/csspeech/csharpref)|
- |[C#, .NET Framework](~/articles/cognitive-services/Speech-Service/quickstarts/text-to-speech.md?pivots=programming-language-csharp&tabs=dotnet)| Windows |[Browse](https://aka.ms/csspeech/csharpref)|
- |[C#, UWP](~/articles/cognitive-services/Speech-Service/quickstarts/text-to-speech.md?pivots=programming-language-csharp&tabs=uwp)| Windows |[Browse](https://aka.ms/csspeech/csharpref)|
- |[C++](~/articles/cognitive-services/Speech-Service/quickstarts/text-to-speech.md?pivots=programming-language-cpp&tabs=windows)| Windows |[Browse](https://aka.ms/csspeech/cppref)|
- |[C++](~/articles/cognitive-services/Speech-Service/quickstarts/text-to-speech.md?pivots=programming-language-cpp&tabs=linux)| Linux |[Browse](https://aka.ms/csspeech/cppref)|
- |[C#, .NET Core](~/articles/cognitive-services/Speech-Service/quickstarts/text-to-speech.md?pivots=programming-language-csharp)| Windows, macOS, Linux |[Browse](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-apis)|
- |[Node.js](quickstart-nodejs-text-to-speech.md)| Window, macOS, Linux |[Browse](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-apis)|
- |[Python](quickstart-python-text-to-speech.md)| Window, macOS, Linux |[Browse](https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-apis)|
-
- ## Sample code
-
- Sample code for text-to-speech is available on GitHub. These samples cover text-to-speech conversion in most popular programming languages.