articles/cognitive-services/Speech-Service/includes/how-to/text-to-speech-basics/text-to-speech-basics-python.md
---
author: eric-urban
ms.service: cognitive-services
ms.topic: include
ms.date: 01/16/2022
ms.author: eur
---
There are a few ways that you can initialize a [`SpeechConfig`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig).

In this example, you create a [`SpeechConfig`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig) by using a speech key and location/region. Get these credentials by following the steps in [Try the Speech service for free](../../../overview.md#try-the-speech-service-for-free).

Next, instantiate a `SpeechSynthesizer` by passing your `speech_config` object and the `audio_config` object as parameters. Then, running speech synthesis and writing the output to a file is as simple as calling `speak_text_async()` with a string of text.

```python
synthesizer.speak_text_async("A simple test to write to a file.")
```
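For context, a minimal end-to-end sketch of the file-output example might look as follows. It assumes the `azure-cognitiveservices-speech` package is installed; the subscription key, region, and output path are placeholders, not real values.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials: substitute your own speech key and service region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")

# Direct the synthesized audio to a .wav file on disk (placeholder path).
audio_config = speechsdk.audio.AudioOutputConfig(filename="path/to/write/file.wav")

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

# .get() blocks until synthesis finishes, so the file is fully written.
synthesizer.speak_text_async("A simple test to write to a file.").get()
```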
Run the program, and a synthesized `.wav` file is written to the location you specified.

In some cases, you might want to output synthesized speech directly to a speaker. To do this, use the example in the previous section, but change the `AudioOutputConfig` by removing the `filename` parameter and setting `use_default_speaker=True`. This outputs the audio to the current active output device.
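Under the same assumptions as before (placeholder credentials, `azure-cognitiveservices-speech` installed), a sketch of the speaker-output variant:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials: substitute your own speech key and service region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")

# No filename here: use_default_speaker routes audio to the active output device.
audio_config = speechsdk.audio.AudioOutputConfig(use_default_speaker=True)

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
synthesizer.speak_text_async("Speaking directly to the default output device.").get()
```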
It's simple to make this change from the previous example. First, remove the `AudioOutputConfig`.

This time, you save the result to a [`SpeechSynthesisResult`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesisresult) variable. The `audio_data` property contains a `bytes` object of the output data. You can work with this object manually, or you can use the [`AudioDataStream`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audiodatastream) class to manage the in-memory stream. In this example, you use the `AudioDataStream` constructor to get a stream from the result.
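As noted, you can also work with the `audio_data` bytes manually. A small, standalone sketch of consuming such a `bytes` object in fixed-size chunks, the way you might feed audio to a playback or upload API (dummy bytes stand in for real synthesis output here):

```python
import io

# Dummy stand-in for result.audio_data; real output comes from the Speech SDK.
audio_data = bytes(range(256)) * 4

# Wrap the bytes in an in-memory stream and consume it in 100-byte chunks.
buffer = io.BytesIO(audio_data)
chunks = []
while True:
    chunk = buffer.read(100)
    if not chunk:
        break
    chunks.append(chunk)

print(len(audio_data), len(chunks))  # 1024 bytes split into 11 chunks
```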
```python
result = synthesizer.speak_ssml_async(ssml_string).get()
stream = speechsdk.AudioDataStream(result)
stream.save_to_wav_file("path/to/write/file.wav")
```
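`speak_ssml_async` expects a complete SSML document as a string for the `ssml_string` argument. A hedged sketch of assembling a minimal one (the voice name here is only an example):

```python
# Build a minimal SSML document: a <speak> root containing one <voice> element.
voice = "en-US-JennyNeural"  # example voice name
text = "Hello from SSML."
ssml_string = (
    "<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>"
    f"<voice name='{voice}'>{text}</voice>"
    "</speak>"
)
print(ssml_string)
```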
> [!NOTE]
> To change the voice without using SSML, you can set the property on the `SpeechConfig` by using `speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"`.
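For example, continuing with placeholder credentials, setting the voice on the config applies it to every subsequent synthesis call:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials: substitute your own speech key and service region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")

# Select the voice once on the config; no SSML needed.
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

# With no audio_config specified, output goes to the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
synthesizer.speak_text_async("This sentence uses the Jenny neural voice.").get()
```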