regularly from the service after you retrieve the results. Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results.
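As a sketch of how `timeToLive` fits into a request, the following builds a JSON body for the [Transcriptions_Create](/rest/api/speechtotext/transcriptions/create) operation. The display name, locale, and content URL are placeholder assumptions; replace them with your own values.

```python
import json

# Hypothetical request body for Transcriptions_Create.
# "PT12H" (an ISO 8601 duration) asks the service to delete the
# transcription automatically 12 hours after it's created.
body = {
    "displayName": "My transcription",          # placeholder
    "locale": "en-US",                          # placeholder
    "contentUrls": ["https://example.com/audio.wav"],  # placeholder
    "properties": {
        "timeToLive": "PT12H",
    },
}

payload = json.dumps(body)
print(payload)
```

Send `payload` as the request body with your usual HTTP client; the service then handles cleanup for you instead of requiring an explicit delete call.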
> [!TIP]
> You can also try the Batch Transcription API using Python on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/python/python-client/main.py).
::: zone-end
::: zone pivot="speech-cli"
::: zone pivot="rest-api"
Here are some property options to configure a transcription when you call the [Transcriptions_Create](/rest/api/speechtotext/transcriptions/create) operation. You can find more examples on the same page, such as [creating a transcription with language identification](/rest/api/speechtotext/transcriptions/create/#create-a-transcription-with-language-identification).
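For instance, a request body for language identification might look like the following sketch. The candidate locales and content URL are illustrative assumptions, not required values.

```python
import json

# Hypothetical Transcriptions_Create body that enables language
# identification: the service picks the best match among the
# candidate locales for each audio file.
li_body = {
    "displayName": "Transcription with language identification",  # placeholder
    "locale": "en-US",  # fallback locale
    "contentUrls": ["https://example.com/audio.wav"],  # placeholder
    "properties": {
        "languageIdentification": {
            "candidateLocales": ["en-US", "de-DE", "es-ES"],
        },
    },
}

print(json.dumps(li_body, indent=2))
```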
`articles/ai-services/speech-service/includes/how-to/speech-synthesis/python.md`

The voice that speaks is determined in order of priority as follows:
- If both `SpeechSynthesisVoiceName` and `SpeechSynthesisLanguage` are set, the `SpeechSynthesisLanguage` setting is ignored. The voice that you specify by using `SpeechSynthesisVoiceName` speaks.
- If the voice element is set by using [Speech Synthesis Markup Language (SSML)](../../../speech-synthesis-markup.md), the `SpeechSynthesisVoiceName` and `SpeechSynthesisLanguage` settings are ignored.
In summary, the order of priority can be described as:

| `SpeechSynthesisVoiceName` set | `SpeechSynthesisLanguage` set | SSML voice element set | Result |
|---|---|---|---|
| ✔ | ✔ | ✗ | The voice that you specify by using `SpeechSynthesisVoiceName` speaks. |
| ✔ | ✔ | ✔ | The voice that you specify by using SSML speaks. |
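The priority rules above can be modeled as a small helper function. This is a hypothetical sketch for illustration only, not part of the Speech SDK, and the voice names used below are just examples.

```python
def resolve_voice(voice_name=None, language=None, ssml_voice=None):
    """Model the voice-selection priority described above.

    An SSML voice element overrides SpeechSynthesisVoiceName,
    which in turn overrides SpeechSynthesisLanguage.
    """
    if ssml_voice:
        return ssml_voice
    if voice_name:
        return voice_name
    if language:
        return f"default voice for {language}"
    return "default voice"

# SpeechSynthesisVoiceName wins over SpeechSynthesisLanguage:
print(resolve_voice(voice_name="en-US-AvaMultilingualNeural", language="en-US"))
# → en-US-AvaMultilingualNeural

# An SSML voice element wins over both:
print(resolve_voice(voice_name="en-US-AvaMultilingualNeural",
                    language="en-US",
                    ssml_voice="en-US-AndrewNeural"))
# → en-US-AndrewNeural
```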
## Synthesize speech to a file
Create a [SpeechSynthesizer](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechsynthesizer) object. This object runs text to speech conversions and outputs to speakers, files, or other output streams. `SpeechSynthesizer` accepts as parameters:
speech_synthesis_result = speech_synthesizer.speak_text_async("I'm excited to try text to speech").get()
```
When you run the program, it creates a synthesized *.wav* file, which is written to the location that you specify. This result is a good example of the most basic usage. Next, you can customize output and handle the output response as an in-memory stream for working with custom scenarios.
In this example, use the `AudioDataStream` constructor to get a stream from the result.