To set up your environment, install the Speech SDK for JavaScript. If you just want the package name to install, run `npm install microsoft-cognitiveservices-speech-sdk`. For detailed installation instructions, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-javascript).
1. Create a new folder `synthesis-quickstart` and go to the quickstart folder with the following command:
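   ```console
   mkdir synthesis-quickstart && cd synthesis-quickstart
   ```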
"\nDid you set the speech resource key and region values?");
72
-
}
73
-
synthesizer.close();
74
-
synthesizer =null;
75
-
},
76
-
function (err) {
77
-
console.trace("err - "+ err);
78
-
synthesizer.close();
79
-
synthesizer =null;
80
-
});
81
-
console.log("Now synthesizing to: "+ audioFile);
82
-
});
83
-
}());
84
-
```
85
-
86
-
1. In *SpeechSynthesis.js*, optionally you can rename *YourAudioFile.wav* to another output file name.
87
-
88
-
1. To change the speech synthesis language, replace `en-US-AvaMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#standard-voices).
All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
41
+
## Synthesize speech to a file
To synthesize speech to a file:
1. Create a new file named *synthesis.js* with the following content:
```javascript
import { createInterface } from "readline";
import { SpeechConfig, AudioConfig, SpeechSynthesizer, ResultReason } from "microsoft-cognitiveservices-speech-sdk";

function synthesizeSpeech() {
    const audioFile = "YourAudioFile.wav";
    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
    const speechConfig = SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
    const audioConfig = AudioConfig.fromAudioFileOutput(audioFile);

    // The language of the voice that speaks.
    speechConfig.speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";

    // Create the speech synthesizer.
    const synthesizer = new SpeechSynthesizer(speechConfig, audioConfig);

    const rl = createInterface({ input: process.stdin, output: process.stdout });
    rl.question("Enter some text that you want to speak >\n> ", function (text) {
        rl.close();
        // Start the synthesizer and wait for a result.
        synthesizer.speakTextAsync(text,
            function (result) {
                if (result.reason === ResultReason.SynthesizingAudioCompleted) {
                    console.log("synthesis finished.");
                } else {
                    console.error("Speech synthesis canceled, " + result.errorDetails +
                        "\nDid you set the speech resource key and region values?");
                }
                synthesizer.close();
            }, function(err) {
                console.trace("err - " + err);
                synthesizer.close();
            });
        console.log("Now synthesizing to: " + audioFile);
    });
}
synthesizeSpeech();
```
In *synthesis.js*, you can optionally rename *YourAudioFile.wav* to another output file name.
To change the speech synthesis language, replace `en-US-AvaMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#standard-voices).
All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is *I'm excited to try text to speech* and you set `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent. If the voice doesn't speak the language of the input text, the Speech service doesn't output synthesized audio.
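As a minimal sketch, switching to the Spanish voice mentioned above is a one-line change to the assignment already in *synthesis.js*:

```javascript
// Use a specific Spanish voice instead of the default multilingual voice.
speechConfig.speechSynthesisVoiceName = "es-ES-ElviraNeural";
```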
1. Run your console application to start speech synthesis to a file:
```console
node synthesis.js
```
> [!IMPORTANT]
> Make sure that you set the `SPEECH_KEY` and `SPEECH_REGION` [environment variables](#set-environment-variables). If you don't set these variables, the sample fails with an error message.
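For example, in a bash shell the variables might be set like this (placeholder values shown; see the linked section for platform-specific steps):

```console
export SPEECH_KEY=your-speech-resource-key
export SPEECH_REGION=your-speech-resource-region
```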
## Output
You should see the following output in the console. Follow the prompt to enter text that you want to synthesize:
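```console
Enter some text that you want to speak >
> I'm excited to try text to speech
Now synthesizing to: YourAudioFile.wav
synthesis finished.
```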