Commit 762a763 ("Freshness pass")
1 parent: f3b6771

File tree: 15 files changed, 61 additions and 61 deletions


articles/ai-services/speech-service/get-started-text-to-speech.md

Lines changed: 2 additions & 2 deletions
@@ -1,12 +1,12 @@
 ---
 title: "Text to speech quickstart - Speech service"
 titleSuffix: Azure AI services
-description: In this quickstart, you create an app that converts text to speech. Learn about supported audio formats and custom configuration options.
+description: Learn how to create an app that converts text to speech, and explore supported audio formats and custom configuration options.
 author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: quickstart
-ms.date: 01/29/2024
+ms.date: 08/07/2024
 ms.author: eur
 ms.devlang: cpp
 # ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
Lines changed: 5 additions & 5 deletions
@@ -1,14 +1,14 @@
 ---
 author: eric-urban
 ms.service: azure-ai-speech
-ms.date: 02/17/2023
+ms.date: 08/07/2024
 ms.topic: include
 ms.author: eur
 ---

 > [!div class="checklist"]
-> * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-> * [Create a Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal.
-> * Get the Language resource key and endpoint. After your Language resource is deployed, select **Go to resource** to view and manage keys.
+> * An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services).
+> * [Create a Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal.
+> * Get the Language resource key and endpoint. After your Language resource is deployed, select **Go to resource** to view and manage keys.
 > * [Create a Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) in the Azure portal.
-> * Get the Speech resource key and region. After your Speech resource is deployed, select **Go to resource** to view and manage keys.
+> * Get the Speech resource key and region. After your Speech resource is deployed, select **Go to resource** to view and manage keys.
Lines changed: 4 additions & 4 deletions
@@ -1,12 +1,12 @@
 ---
 author: eric-urban
 ms.service: azure-ai-speech
-ms.date: 08/24/2023
+ms.date: 08/07/2024
 ms.topic: include
 ms.author: eur
 ---

 > [!div class="checklist"]
-> - Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-> - <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Create a Speech resource</a> in the Azure portal.
-> - Your Speech resource key and region. After your Speech resource is deployed, select **Go to resource** to view and manage keys.
+> - An Azure subscription. You can [create one for free](https://azure.microsoft.com/free/cognitive-services).
+> - [Create a Speech resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices) in the Azure portal.
+> - Get the Speech resource key and region. After your Speech resource is deployed, select **Go to resource** to view and manage keys.

articles/ai-services/speech-service/includes/common/csharp.md

Lines changed: 1 addition & 1 deletion
@@ -6,4 +6,4 @@ ms.topic: include
 ms.author: eur
 ---

-[Reference documentation](/dotnet/api/microsoft.cognitiveservices.speech) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) | [Additional Samples on GitHub](https://aka.ms/speech/github-csharp)
+[Reference documentation](/dotnet/api/microsoft.cognitiveservices.speech) | [Package (NuGet)](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) | [Additional samples on GitHub](https://aka.ms/speech/github-csharp)

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/cli.md

Lines changed: 6 additions & 6 deletions
@@ -16,19 +16,19 @@ ms.author: eur

 [!INCLUDE [SPX Setup](../../spx-setup-quick.md)]

-## Synthesize to speaker output
+## Send speech to speaker

-Run the following command for speech synthesis to the default speaker output. You can modify the voice and the text to be synthesized.
+Run the following command to output speech synthesis to the default speaker. You can modify the voice and the text to be synthesized.

 ```console
 spx synthesize --text "I'm excited to try text to speech" --voice "en-US-AvaMultilingualNeural"
 ```

 If you don't set a voice name, the default voice for `en-US` speaks.

-All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `--voice "es-ES-ElviraNeural"`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
+All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `--voice "es-ES-ElviraNeural"`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.

-Run this command for information about more speech synthesis options such as file input and output:
+Run this command for information about more speech synthesis options, such as file input and output:

 ```console
 spx help synthesize
@@ -40,9 +40,9 @@ spx help synthesize

 You can have finer control over voice styles, prosody, and other settings by using [Speech Synthesis Markup Language (SSML)](~/articles/ai-services/speech-service/speech-synthesis-markup.md).

-### OpenAI text to speech voices in Azure AI Speech
+### OpenAI text-to-speech voices in Azure AI Speech

-OpenAI text to speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.
+OpenAI text-to-speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.

 ## Clean up resources
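The CLI quickstart above points to SSML for finer control over voice styles and prosody. As a minimal sketch of what such a document looks like (the voice name matches the one used in this quickstart; the root `speak` element with `version`, `xmlns`, and `xml:lang` attributes is required by the SSML format):

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-AvaMultilingualNeural">
    I'm excited to try text to speech.
  </voice>
</speak>
```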

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/cpp.md

Lines changed: 5 additions & 5 deletions
@@ -16,13 +16,13 @@ ms.author: eur

 ## Set up the environment

-The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) that implements .NET Standard 2.0. Install the Speech SDK later in this guide. For any requirements, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-cpp).
+The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) that implements .NET Standard 2.0. Install the Speech SDK later in this guide. For detailed installation instructions, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-cpp).

 ### Set environment variables

 [!INCLUDE [Environment variables](../../common/environment-variables.md)]

-## Synthesize to speaker output
+## Send speech to speaker

 Follow these steps to create a console application and install the Speech SDK.

@@ -114,7 +114,7 @@ Follow these steps to create a console application and install the Speech SDK.

 1. To change the speech synthesis language, replace `en-US-AvaMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices).

-    All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
+    All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.

 1. [Build and run your new console application](/cpp/build/vscpp-step-2-build) to start speech synthesis to the default speaker.

@@ -137,9 +137,9 @@ This quickstart uses the `SpeakTextAsync` operation to synthesize a short block
 - See [how to synthesize speech](~/articles/ai-services/speech-service/how-to-speech-synthesis.md) and [Speech Synthesis Markup Language (SSML) overview](~/articles/ai-services/speech-service/speech-synthesis-markup.md) for information about speech synthesis from a file and finer control over voice styles, prosody, and other settings.
 - See [batch synthesis API for text to speech](~/articles/ai-services/speech-service/batch-synthesis.md) for information about synthesizing long-form text to speech.

-### OpenAI text to speech voices in Azure AI Speech
+### OpenAI text-to-speech voices in Azure AI Speech

-OpenAI text to speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.
+OpenAI text-to-speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.

 ## Clean up resources
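The "Set environment variables" step referenced above amounts to making the Speech resource key and region available to the app before running it. A hedged sketch for a bash-style shell, assuming the conventional `SPEECH_KEY` and `SPEECH_REGION` variable names used by the shared environment-variables include; the values shown are placeholders, not real credentials:

```shell
# Placeholder values: substitute your own Speech resource key and region.
# SPEECH_KEY and SPEECH_REGION are assumed variable names for this sketch.
export SPEECH_KEY="your-speech-resource-key"
export SPEECH_REGION="westus"
```

On Windows, the equivalent would use `setx` or the System Properties dialog instead of `export`.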

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/csharp.md

Lines changed: 6 additions & 6 deletions
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 8/07/2024
 ms.author: eur
 ---

@@ -16,13 +16,13 @@ ms.author: eur

 ## Set up the environment

-The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) that implements .NET Standard 2.0. Install the Speech SDK later in this guide. For any requirements, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-csharp).
+The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) that implements .NET Standard 2.0. Install the Speech SDK later in this guide by using the console. For detailed installation instructions, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-csharp).

 ### Set environment variables

 [!INCLUDE [Environment variables](../../common/environment-variables.md)]

-## Synthesize to speaker output
+## Send speech to speaker

 Follow these steps to create a console application and install the Speech SDK.

@@ -103,7 +103,7 @@ Follow these steps to create a console application and install the Speech SDK.

 1. To change the speech synthesis language, replace `en-US-AvaMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices).

-    All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
+    All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `es-ES-ElviraNeural` as the language, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.

 1. Run your new console application to start speech synthesis to the default speaker.

@@ -130,9 +130,9 @@ This quickstart uses the `SpeakTextAsync` operation to synthesize a short block
 - See [how to synthesize speech](~/articles/ai-services/speech-service/how-to-speech-synthesis.md) and [Speech Synthesis Markup Language (SSML) overview](~/articles/ai-services/speech-service/speech-synthesis-markup.md) for information about speech synthesis from a file and finer control over voice styles, prosody, and other settings.
 - See [batch synthesis API for text to speech](~/articles/ai-services/speech-service/batch-synthesis.md) for information about synthesizing long-form text to speech.

-### OpenAI text to speech voices in Azure AI Speech
+### OpenAI text-to-speech voices in Azure AI Speech

-OpenAI text to speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.
+OpenAI text-to-speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.

 ## Clean up resources

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/go.md

Lines changed: 5 additions & 5 deletions
@@ -16,13 +16,13 @@ ms.author: eur

 ## Set up the environment

-Install the Speech SDK for Go. For requirements and instructions, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-go).
+Install the Speech SDK for the Go language. For detailed installation instructions, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-go).

 ### Set environment variables

 [!INCLUDE [Environment variables](../../common/environment-variables.md)]

-## Synthesize to speaker output
+## Send speech to speaker

 Follow these steps to create a Go module.

@@ -136,7 +136,7 @@ Follow these steps to create a Go module.

 1. To change the speech synthesis language, replace `en-US-AvaMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices).

-    All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
+    All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.

 1. Run the following commands to create a *go.mod* file that links to components hosted on GitHub:

@@ -157,9 +157,9 @@ Follow these steps to create a Go module.

 ## Remarks

-### OpenAI text to speech voices in Azure AI Speech
+### OpenAI text-to-speech voices in Azure AI Speech

-OpenAI text to speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.
+OpenAI text-to-speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.

 ## Clean up resources
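The Go setup above creates a *go.mod* file that links to components hosted on GitHub. As a rough sketch of what the resulting file might contain (the module name, Go version, and the pinned SDK version are illustrative assumptions, not values from this commit):

```
module quickstart

go 1.21

// Speech SDK for Go; the version shown here is illustrative only.
require github.com/Microsoft/cognitive-services-speech-sdk-go v1.33.0
```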

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/intro.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ ms.date: 2/1/2024
 ms.author: eur
 ---

-In this quickstart, you run an application that does text to speech synthesis.
+Learn how to use Azure AI Speech to run an application for text-to-speech synthesis. You can change the voice, enter text to be converted, and listen to the output on your computer's speaker.

 > [!TIP]
 > You can try text to speech in the [Speech Studio Voice Gallery](https://aka.ms/speechstudio/voicegallery) without signing up or writing any code.

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/java.md

Lines changed: 5 additions & 5 deletions
@@ -60,7 +60,7 @@ To set up your environment, [install the Speech SDK](~/articles/ai-services/spee

 [!INCLUDE [Environment variables](../../common/environment-variables.md)]

-## Synthesize to speaker output
+## Send speech to speaker

 Follow these steps to create a console application for speech recognition.

@@ -117,9 +117,9 @@ Follow these steps to create a console application for speech recognition.

 1. To change the speech synthesis language, replace `en-US-AvaMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices).

-    All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is "I'm excited to try text to speech" and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
+    All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.

-1. Run your console application to start speech synthesis to the default speaker.
+1. Run your console application to output speech synthesis to the default speaker.

    ```console
    javac SpeechSynthesis.java -cp ".;target\dependency\*"
@@ -145,9 +145,9 @@ This quickstart uses the `SpeakTextAsync` operation to synthesize a short block
 - See [how to synthesize speech](~/articles/ai-services/speech-service/how-to-speech-synthesis.md) and [Speech Synthesis Markup Language (SSML) overview](~/articles/ai-services/speech-service/speech-synthesis-markup.md) for information about speech synthesis from a file and finer control over voice styles, prosody, and other settings.
 - See [batch synthesis API for text to speech](~/articles/ai-services/speech-service/batch-synthesis.md) for information about synthesizing long-form text to speech.

-### OpenAI text to speech voices in Azure AI Speech
+### OpenAI text-to-speech voices in Azure AI Speech

-OpenAI text to speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.
+OpenAI text-to-speech voices are also supported. See [OpenAI text to speech voices in Azure AI Speech](../../../openai-voices.md) and [multilingual voices](../../../language-support.md?tabs=tts#multilingual-voices). You can replace `en-US-AvaMultilingualNeural` with a supported OpenAI voice name such as `en-US-FableMultilingualNeural`.

 ## Clean up resources
