articles/cognitive-services/Speech-Service/language-support.md (8 additions, 8 deletions)
@@ -201,6 +201,14 @@ You can also get a full list of languages and voices supported for each specific
 > [!IMPORTANT]
 > Pricing varies for Prebuilt Neural Voice (referred to as *Neural* on the pricing page) and Custom Neural Voice (referred to as *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
 
+#### Custom Neural Voice
+
+Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data. There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (preview).
+
+Select the right locale that matches your training data to train a custom neural voice model. For example, if the recording data is spoken in English with a British accent, select `en-GB`.
+
+With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages with Cross-lingual support.
+
 #### Prebuilt neural voices
 
 Prebuilt neural voices are created from samples that use a 24-kHz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
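For context on that last line: a minimal Speech SDK sketch (Python) of requesting a lower output sample rate from a prebuilt neural voice. The key, region, voice name, and sample text are placeholders, not values taken from this change.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region; substitute your own Speech resource values.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
speech_config.speech_synthesis_voice_name = "en-GB-SoniaNeural"

# The voice models are built from 24-kHz samples; requesting a 16-kHz RIFF
# format makes the service downsample the audio at synthesis time.
speech_config.set_speech_synthesis_output_format(
    speechsdk.SpeechSynthesisOutputFormat.Riff16Khz16BitMonoPcm
)

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Hello from a prebuilt neural voice.").get()
```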
@@ -210,14 +218,6 @@ Please note that the following neural voices are retired.
 - The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-howto.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.
 - The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria."
 
-#### Custom Neural Voice
-
-Custom Neural Voice lets you create synthetic voices that are rich in speaking styles. You can create a unique brand voice in multiple languages and styles by using a small set of recording data. There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (preview).
-
-Select the right locale that matches your training data to train a custom neural voice model. For example, if the recording data is spoken in English with a British accent, select `en-GB`.
-
-With the cross-lingual feature (preview), you can transfer your custom neural voice model to speak a second language. For example, with the `zh-CN` data, you can create a voice that speaks `en-AU` or any of the languages with Cross-lingual support.
-
 #### Voice styles and roles
 
 In some cases, you can adjust the speaking style to express different emotions like cheerfulness, empathy, and calm. You can optimize the voice for different scenarios like customer service, newscast, and voice assistant. With roles, the same voice can act as a different age and gender.
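The styles and roles described above are requested through SSML with the `mstts:express-as` element. A minimal sketch (Python), assuming a voice that supports both a style and a role; the key, region, voice, style, role, and sample text are illustrative placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region; substitute your own Speech resource values.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="YOUR_REGION")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

# SSML using the mstts extension to request a speaking style and a role.
# zh-CN-XiaomoNeural is used here only because it supports both attributes.
ssml = """
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="zh-CN">
  <voice name="zh-CN-XiaomoNeural">
    <mstts:express-as style="cheerful" role="Girl">
      你好，欢迎使用语音服务。
    </mstts:express-as>
  </voice>
</speak>
"""
result = synthesizer.speak_ssml_async(ssml).get()
```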
articles/cognitive-services/Speech-Service/multi-device-conversation.md (1 addition, 1 deletion)
@@ -36,7 +36,7 @@ Whereas [Conversation Transcription](conversation-transcription.md) works on a s
 ## Key features
 
 -**Real-time transcription:** Everyone will receive a transcript of the conversation, so they can follow along the text in real-time or save it for later.
--**Real-time translation:** With more than 70 [supported languages](language-support.md#text-languages) for text translation, users can translate the conversation to their preferred language(s).
+-**Real-time translation:** With more than 70 [supported languages](language-support.md#translate-to-text-language) for text translation, users can translate the conversation to their preferred language(s).
 -**Readable transcripts:** The transcription and translation are easy to follow, with punctuation and sentence breaks.
 -**Voice or text input:** Each user can speak or type on their own device, depending on the language support capabilities enabled for the participant's chosen language. Please refer to [Language support](language-support.md#speech-to-text).
 -**Message relay:** The multi-device conversation service will distribute messages sent by one client to all the others, in the language(s) of their choice.