articles/ai-services/speech-service/get-started-text-to-speech.md
+2 −2 lines changed: 2 additions & 2 deletions
@@ -1,14 +1,14 @@
---
title: "Text to speech quickstart - Speech service"
titleSuffix: Azure AI services
- description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis.
+ description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio formats, and custom configuration options.
articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/cli.md
+14 −13 lines changed: 14 additions & 13 deletions
@@ -2,7 +2,7 @@
author: eric-urban
ms.service: cognitive-services
ms.topic: include
- ms.date: 03/15/2022
+ ms.date: 08/25/2023
ms.author: eur
---
@@ -18,30 +18,31 @@ ms.author: eur
## Synthesize to speaker output

- Run the following command for speech synthesis to the default speaker output. You can modify the text to be synthesized and the voice.
+ Run the following command for speech synthesis to the default speaker output. You can modify the voice and the text to be synthesized.

```console
spx synthesize --text "I'm excited to try text to speech" --voice "en-US-JennyNeural"
```

- If you don't set a voice name, the default voice for `en-US` will speak. All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `--voice "es-ES-ElviraNeural"`, the text is spoken in English with a Spanish accent. If the voice does not speak the language of the input text, the Speech service won't output synthesized audio.
+ If you don't set a voice name, the default voice for `en-US` speaks.

- ## Remarks
+ All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `--voice "es-ES-ElviraNeural"`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.

- Now that you've completed the quickstart, here are some additional considerations:
+ ## Remarks

You can have finer control over voice styles, prosody, and other settings by using [Speech Synthesis Markup Language (SSML)](~/articles/ai-services/speech-service/speech-synthesis-markup.md).

- In the following example, the voice and style ('excited') are provided in the SSML block.
+ - In the following example, the voice and style, `excited`, are provided in the SSML block.

- ```console
- spx synthesize --ssml "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'><voice name='en-US-JennyNeural'><mstts:express-as style='excited'>I'm excited to try text to speech</mstts:express-as></voice></speak>"
- ```
+ ```console
+ spx synthesize --ssml "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='https://www.w3.org/2001/mstts' xml:lang='en-US'><voice name='en-US-JennyNeural'><mstts:express-as style='excited'>I'm excited to try text to speech</mstts:express-as></voice></speak>"
+ ```

- Run this command for information about additional speech synthesis options such as file input and output:
- ```console
- spx help synthesize
- ```
+ - Run this command for information about more speech synthesis options such as file input and output:
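The removed block above shows the command that the new text still refers to. As an illustrative sketch, it also helps to know that the Speech CLI can write the audio to a file; the `--audio output` option here is an assumption based on typical Speech CLI usage, not something taken from this diff:

```console
spx help synthesize
spx synthesize --text "I'm excited to try text to speech" --voice "en-US-JennyNeural" --audio output greetings.wav
```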
articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/cpp.md

- The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-cpp) for any more requirements.
+ The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) that implements .NET Standard 2.0. Install the Speech SDK later in this guide. For any requirements, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-cpp).

### Set environment variables
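Later steps read your Speech resource key and region from the `SPEECH_KEY` and `SPEECH_REGION` environment variables. As a minimal sketch with placeholder values (Windows `setx` shown; on Linux or macOS, use `export` in your shell profile and start a new session):

```console
setx SPEECH_KEY your-speech-resource-key
setx SPEECH_REGION your-speech-resource-region
```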
@@ -25,13 +26,15 @@ The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/
Follow these steps to create a new console application and install the Speech SDK.

- 1. Create a new C++ console project in Visual Studio Community 2022 named `SpeechSynthesis`.
+ 1. Create a C++ console project in [Visual Studio Community](https://visualstudio.microsoft.com/downloads/) named `SpeechSynthesis`.

1. Install the Speech SDK in your new project with the NuGet package manager.
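If you prefer a command over the NuGet UI for that install step, the Package Manager Console equivalent is typically the following (an illustration, not part of this change):

```console
Install-Package Microsoft.CognitiveServices.Speech
```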
1. Replace the contents of *SpeechSynthesis.cpp* with the following code:
```cpp
#include <iostream>
#include <stdlib.h>
@@ -108,26 +111,28 @@ Follow these steps to create a new console application and install the Speech SD
}
```

- 1. To change the speech synthesis language, replace `en-US-JennyNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices). All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice does not speak the language of the input text, the Speech service won't output synthesized audio.
+ 1. To change the speech synthesis language, replace `en-US-JennyNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices).
+ All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.

- [Build and run your new console application](/cpp/build/vscpp-step-2-build) to start speech synthesis to the default speaker.
+ 1. [Build and run your new console application](/cpp/build/vscpp-step-2-build) to start speech synthesis to the default speaker.

- > [!IMPORTANT]
- > Make sure that you set the `SPEECH_KEY` and `SPEECH_REGION` environment variables as described [above](#set-environment-variables). If you don't set these variables, the sample will fail with an error message.
+ > [!IMPORTANT]
+ > Make sure that you set the `SPEECH_KEY` and `SPEECH_REGION` environment variables as described in [Set environment variables](#set-environment-variables). If you don't set these variables, the sample fails with an error message.

- Enter some text that you want to speak. For example, type "I'm excited to try text to speech." Press the Enter key to hear the synthesized speech.
+ 1. Enter some text that you want to speak. For example, type *I'm excited to try text to speech*. Select the **Enter** key to hear the synthesized speech.

- ```console
- Enter some text that you want to speak >
- I'm excited to try text to speech
- ```
+ ```console
+ Enter some text that you want to speak >
+ I'm excited to try text to speech
+ ```

## Remarks

- Now that you've completed the quickstart, here are some additional considerations:

This quickstart uses the `SpeakTextAsync` operation to synthesize a short block of text that you enter. You can also get text from files as described in these guides:

- - For information about speech synthesis from a file and finer control over voice styles, prosody, and other settings, see [How to synthesize speech](~/articles/ai-services/speech-service/how-to-speech-synthesis.md) and [Improve synthesis with Speech Synthesis Markup Language (SSML)](~/articles/ai-services/speech-service/speech-synthesis-markup.md).
- - For information about synthesizing long-form text to speech, see [batch synthesis](~/articles/ai-services/speech-service/batch-synthesis.md).
+ - For information about speech synthesis from a file and finer control over voice styles, prosody, and other settings, see [How to synthesize speech](~/articles/ai-services/speech-service/how-to-speech-synthesis.md) and [Speech Synthesis Markup Language (SSML) overview](~/articles/ai-services/speech-service/speech-synthesis-markup.md).
+ - For information about synthesizing long-form text to speech, see [Batch synthesis API for text to speech](~/articles/ai-services/speech-service/batch-synthesis.md).
articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/csharp.md

- The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-csharp) for any more requirements.
+ The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) that implements .NET Standard 2.0. Install the Speech SDK later in this guide. For any requirements, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-csharp).

### Set environment variables
@@ -25,16 +26,20 @@ The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/
Follow these steps to create a new console application and install the Speech SDK.

- 1. Open a command prompt where you want the new project, and create a console application with the .NET CLI. The `Program.cs` file should be created in the project directory.
- ```dotnetcli
- dotnet new console
- ```
+ 1. Open a command prompt where you want the new project. Run this command to create a console application with the .NET CLI. The command creates a *Program.cs* file in the project directory.
+ ```dotnetcli
+ dotnet new console
+ ```

1. Install the Speech SDK in your new project with the .NET CLI.
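For that install step, the standard .NET CLI command for the Speech SDK NuGet package, run from the project directory, is shown here as an illustration (it isn't part of this change):

```dotnetcli
dotnet add package Microsoft.CognitiveServices.Speech
```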
1. Replace the contents of *Program.cs* with the following code.
```csharp
using System;
using System.IO;
@@ -94,30 +99,32 @@ Follow these steps to create a new console application and install the Speech SD
}
```

- 1. To change the speech synthesis language, replace `en-US-JennyNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices). All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice does not speak the language of the input text, the Speech service won't output synthesized audio.
- > Make sure that you set the `SPEECH_KEY` and `SPEECH_REGION` environment variables as described [above](#set-environment-variables). If you don't set these variables, the sample will fail with an error message.
+ ```console
+ dotnet run
+ ```

- Enter some text that you want to speak. For example, type "I'm excited to try text to speech." Press the Enter key to hear the synthesized speech.
+ > [!IMPORTANT]
+ > Make sure that you set the `SPEECH_KEY` and `SPEECH_REGION` environment variables as described in [Set environment variables](#set-environment-variables). If you don't set these variables, the sample fails with an error message.

- ```console
- Enter some text that you want to speak >
- I'm excited to try text to speech
- ```
+ 1. Enter some text that you want to speak. For example, type *I'm excited to try text to speech*. Select the **Enter** key to hear the synthesized speech.
+ ```console
+ Enter some text that you want to speak >
+ I'm excited to try text to speech
+ ```

## Remarks

- Now that you've completed the quickstart, here are some additional considerations:

This quickstart uses the `SpeakTextAsync` operation to synthesize a short block of text that you enter. You can also get text from files as described in these guides:

- - For information about speech synthesis from a file and finer control over voice styles, prosody, and other settings, see [How to synthesize speech](~/articles/ai-services/speech-service/how-to-speech-synthesis.md) and [Improve synthesis with Speech Synthesis Markup Language (SSML)](~/articles/ai-services/speech-service/speech-synthesis-markup.md).
- - For information about synthesizing long-form text to speech, see [batch synthesis](~/articles/ai-services/speech-service/batch-synthesis.md).
+ - For information about speech synthesis from a file and finer control over voice styles, prosody, and other settings, see [How to synthesize speech](~/articles/ai-services/speech-service/how-to-speech-synthesis.md) and [Speech Synthesis Markup Language (SSML) overview](~/articles/ai-services/speech-service/speech-synthesis-markup.md).
+ - For information about synthesizing long-form text to speech, see [Batch synthesis API for text to speech](~/articles/ai-services/speech-service/batch-synthesis.md).
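To make the `SpeakTextAsync` flow concrete, here's a minimal sketch of the kind of *Program.cs* this quickstart builds, assuming the SDK was added with `dotnet add package Microsoft.CognitiveServices.Speech` and the environment variables above are set. It illustrates the standard Speech SDK calls and isn't the exact code from the article:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class Program
{
    static async Task Main()
    {
        // Read the Speech resource key and region from the environment variables
        // that the quickstart asks you to set.
        var speechConfig = SpeechConfig.FromSubscription(
            Environment.GetEnvironmentVariable("SPEECH_KEY"),
            Environment.GetEnvironmentVariable("SPEECH_REGION"));
        speechConfig.SpeechSynthesisVoiceName = "en-US-JennyNeural";

        // Synthesize to the default speaker output.
        using var synthesizer = new SpeechSynthesizer(speechConfig);
        var result = await synthesizer.SpeakTextAsync("I'm excited to try text to speech");

        if (result.Reason == ResultReason.SynthesizingAudioCompleted)
        {
            Console.WriteLine("Speech synthesized to the default speaker.");
        }
        else if (result.Reason == ResultReason.Canceled)
        {
            var details = SpeechSynthesisCancellationDetails.FromResult(result);
            Console.WriteLine($"Canceled: {details.Reason} {details.ErrorDetails}");
        }
    }
}
```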
articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/go.md
+20 −18 lines changed: 20 additions & 18 deletions
@@ -2,7 +2,7 @@
author: eric-urban
ms.service: cognitive-services
ms.topic: include
- ms.date: 03/15/2022
+ ms.date: 08/25/2023
ms.author: eur
---
@@ -16,18 +16,18 @@ ms.author: eur
## Set up the environment

- Install the [Speech SDK for Go](../../../quickstarts/setup-platform.md?pivots=programming-language-go&tabs=dotnet%252cwindows%252cjre%252cbrowser). Check the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-go) for any more requirements.
+ Install the Speech SDK for Go. For requirements and instructions, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-go).

- 1. Open a command prompt where you want the new module, and create a new file named `speech-synthesis.go`.
- 1. Copy the following code into `speech_synthesis.go`:
+ 1. Open a console window where you want the new module, and then create a new file named *speech-synthesis.go*.
+ 1. Copy the following code into *speech-synthesis.go*:

```go
package main
@@ -134,24 +134,26 @@ Follow these steps to create a new GO module.
}
```

- 1. To change the speech synthesis language, replace `en-US-JennyNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices). All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice does not speak the language of the input text, the Speech service won't output synthesized audio.
+ 1. To change the speech synthesis language, replace `en-US-JennyNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices).

- Run the following commands to create a `go.mod` file that links to components hosted on GitHub:
+ All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.

- ```cmd
- go mod init speech-synthesis
- go get github.com/Microsoft/cognitive-services-speech-sdk-go
- ```
+ 1. Run the following commands to create a *go.mod* file that links to components hosted on GitHub:

- > [!IMPORTANT]
- > Make sure that you set the `SPEECH_KEY` and `SPEECH_REGION` environment variables as described [above](#set-environment-variables). If you don't set these variables, the sample will fail with an error message.
+ ```console
+ go mod init speech-synthesis
+ go get github.com/Microsoft/cognitive-services-speech-sdk-go
+ ```

- Now build and run the code:
+ > [!IMPORTANT]
+ > Make sure that you set the `SPEECH_KEY` and `SPEECH_REGION` environment variables as described in [Set environment variables](#set-environment-variables). If you don't set these variables, the sample fails with an error message.
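As an illustrative sketch of the build-and-run step that the removed "Now build and run the code" line refers to, the standard Go toolchain commands from the module directory would be (assumed, not taken from this change):

```console
go build
go run .
```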