Commit 77f3c90
Get started with text to speech using TypeScript
1 parent 935716d

13 files changed: +282 −138 lines

articles/ai-services/speech-service/get-started-text-to-speech.md

Lines changed: 4 additions & 4 deletions
@@ -6,12 +6,12 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: quickstart
-ms.date: 3/10/2025
+ms.date: 7/17/2025
 ms.author: eur
 ms.devlang: cpp
 # ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python
 ms.custom: devx-track-python, devx-track-js, devx-track-csharp, mode-other, devx-track-extended-java, devx-track-go
-zone_pivot_groups: programming-languages-speech-services
+zone_pivot_groups: programming-languages-text-to-speech
 keywords: text to speech
 #customer intent: As a user, I want to create speech output from text by using my choice of technologies which fit into my current processes.
 ---
@@ -54,8 +54,8 @@ keywords: text to speech
 [!INCLUDE [REST include](includes/quickstarts/text-to-speech-basics/rest.md)]
 ::: zone-end
 
-::: zone pivot="programming-language-cli"
-[!INCLUDE [CLI include](includes/quickstarts/text-to-speech-basics/cli.md)]
+::: zone pivot="programming-language-typescript"
+[!INCLUDE [TypeScript include](includes/quickstarts/text-to-speech-basics/typescript.md)]
 ::: zone-end
 
 ## Next step

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/cli.md

Lines changed: 0 additions & 49 deletions
This file was deleted.

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/cpp.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 7/17/2025
 ms.author: eur
 ---

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/go.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 7/17/2025
 ms.author: eur
 ---

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/intro.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 7/17/2025
 ms.author: eur
 ---

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/java.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 7/17/2025
 ms.author: eur
 ---

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/javascript.md

Lines changed: 75 additions & 77 deletions
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 7/17/2025
 ms.author: eur
 ---
@@ -14,98 +14,96 @@ ms.author: eur
 
 [!INCLUDE [Prerequisites](../../common/azure-prerequisites.md)]
 
-## Set up the environment
+## Set up
 
-To set up your environment, install the Speech SDK for JavaScript. If you just want the package name to install, run `npm install microsoft-cognitiveservices-speech-sdk`. For detailed installation instructions, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-javascript).
+1. Create a new folder `translation-quickstart` and go to the quickstart folder with the following command:
 
-### Set environment variables
+   ```shell
+   mkdir translation-quickstart && cd translation-quickstart
+   ```
+
+1. Create the `package.json` with the following command:
 
-[!INCLUDE [Environment variables](../../common/environment-variables.md)]
+   ```shell
+   npm init -y
+   ```
 
-## Create the application
+1. Install the Speech SDK for JavaScript with:
 
-Follow these steps to create a Node.js console application for speech synthesis.
+   ```console
+   npm install microsoft-cognitiveservices-speech-sdk
+   ```
 
-1. Open a console window where you want the new project, and create a file named *SpeechSynthesis.js*.
-1. Install the Speech SDK for JavaScript:
+### Retrieve resource information
 
-   ```console
-   npm install microsoft-cognitiveservices-speech-sdk
-   ```
-
-1. Copy the following code into *SpeechSynthesis.js*:
-
-   ```javascript
-   (function() {
-
-   "use strict";
-
-   var sdk = require("microsoft-cognitiveservices-speech-sdk");
-   var readline = require("readline");
-
-   var audioFile = "YourAudioFile.wav";
-   // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
-   const speechConfig = sdk.SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
-   const audioConfig = sdk.AudioConfig.fromAudioFileOutput(audioFile);
-
-   // The language of the voice that speaks.
-   speechConfig.speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";
-
-   // Create the speech synthesizer.
-   var synthesizer = new sdk.SpeechSynthesizer(speechConfig, audioConfig);
-
-   var rl = readline.createInterface({
-     input: process.stdin,
-     output: process.stdout
-   });
-
-   rl.question("Enter some text that you want to speak >\n> ", function (text) {
-     rl.close();
-     // Start the synthesizer and wait for a result.
-     synthesizer.speakTextAsync(text,
-       function (result) {
-         if (result.reason === sdk.ResultReason.SynthesizingAudioCompleted) {
-           console.log("synthesis finished.");
-         } else {
-           console.error("Speech synthesis canceled, " + result.errorDetails +
-             "\nDid you set the speech resource key and region values?");
-         }
-         synthesizer.close();
-         synthesizer = null;
-       },
-       function (err) {
-         console.trace("err - " + err);
-         synthesizer.close();
-         synthesizer = null;
-       });
-     console.log("Now synthesizing to: " + audioFile);
-   });
-   }());
-   ```
-
-1. In *SpeechSynthesis.js*, optionally you can rename *YourAudioFile.wav* to another output file name.
-
-1. To change the speech synthesis language, replace `en-US-AvaMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#standard-voices).
+[!INCLUDE [Environment variables](../../common/environment-variables.md)]
 
-   All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
+## Synthesize speech to a file
+
+To synthesize speech to a file:
+
+1. Create a new file named *synthesis.js* with the following content:
+
+   ```javascript
+   import { createInterface } from "readline";
+   import { SpeechConfig, AudioConfig, SpeechSynthesizer, ResultReason } from "microsoft-cognitiveservices-speech-sdk";
+
+   function synthesizeSpeech() {
+     const audioFile = "YourAudioFile.wav";
+     // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
+     const speechConfig = SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
+     const audioConfig = AudioConfig.fromAudioFileOutput(audioFile);
+     // The language of the voice that speaks.
+     speechConfig.speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";
+     // Create the speech synthesizer.
+     const synthesizer = new SpeechSynthesizer(speechConfig, audioConfig);
+     const rl = createInterface({
+       input: process.stdin,
+       output: process.stdout
+     });
+     rl.question("Enter some text that you want to speak >\n> ", function (text) {
+       rl.close();
+       // Start the synthesizer and wait for a result.
+       synthesizer.speakTextAsync(text, function (result) {
+         if (result.reason === ResultReason.SynthesizingAudioCompleted) {
+           console.log("synthesis finished.");
+         } else {
+           console.error("Speech synthesis canceled, " + result.errorDetails +
+             "\nDid you set the speech resource key and region values?");
+         }
+         synthesizer.close();
+       }, function (err) {
+         console.trace("err - " + err);
+         synthesizer.close();
+       });
+       console.log("Now synthesizing to: " + audioFile);
+     });
+   }
+   synthesizeSpeech();
+   ```
+
+   In *synthesis.js*, optionally you can rename *YourAudioFile.wav* to another output file name.
+
+   To change the speech synthesis language, replace `en-US-AvaMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#standard-voices).
+
+   All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
 
 1. Run your console application to start speech synthesis to a file:
 
    ```console
-   node SpeechSynthesis.js
+   node synthesis.js
    ```
 
-   > [!IMPORTANT]
-   > Make sure that you set the `SPEECH_KEY` and `SPEECH_REGION` [environment variables](#set-environment-variables). If you don't set these variables, the sample fails with an error message.
+## Output
 
-1. The provided text should be in an audio file:
+You should see the following output in the console. Follow the prompt to enter text that you want to synthesize:
 
-   ```output
-   Enter some text that you want to speak >
-   > I'm excited to try text to speech
-   Now synthesizing to: YourAudioFile.wav
-   synthesis finished.
-   ```
+```console
+Enter some text that you want to speak >
+> I'm excited to try text to speech
+Now synthesizing to: YourAudioFile.wav
+synthesis finished.
+```
 
 ## Remarks
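Note that the new version of the quickstart drops the IMPORTANT note reminding readers to set the `SPEECH_KEY` and `SPEECH_REGION` environment variables before running the sample. A minimal sketch of a fail-fast check, in plain Node.js, that could sit ahead of `SpeechConfig.fromSubscription` (the helper name `requireEnv` is our own illustration, not part of the Speech SDK):

```javascript
// Hypothetical pre-flight helper: fail fast with a clear message when
// required environment variables are missing, instead of letting the
// SDK fail later with a less obvious runtime error.
function requireEnv(names) {
  // Collect every name that is unset or empty in process.env.
  const missing = names.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error("Missing environment variables: " + missing.join(", "));
  }
  // Return the values in the same order as the requested names.
  return names.map((name) => process.env[name]);
}

// Possible usage in synthesis.js, before SpeechConfig.fromSubscription:
// const [key, region] = requireEnv(["SPEECH_KEY", "SPEECH_REGION"]);
```

This keeps the guidance from the removed note actionable without depending on the deleted CLI include.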

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/objectivec.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 7/17/2025
 ms.author: eur
 ---

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/python.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 7/17/2025
 ms.author: eur
 ---

articles/ai-services/speech-service/includes/quickstarts/text-to-speech-basics/rest.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 2/1/2024
+ms.date: 7/17/2025
 ms.author: eur
 ---

0 commit comments