Commit ac039ed

Merge pull request #269646 from MicrosoftDocs/main
3/20/2024 AM Publish
2 parents 9f83988 + 6d5d1d5 commit ac039ed

File tree: 95 files changed (+741, −256 lines)


articles/ai-services/computer-vision/includes/how-to-guides/analyze-image-40-rest.md

Lines changed: 1 addition & 1 deletion
@@ -74,7 +74,7 @@ You can also do image analysis with a custom trained model. To create and train

 To use a custom model, don't use the features query parameter. Instead, set the `model-name` parameter to the name of your model as shown here. Replace `MyCustomModelName` with your custom model name.

-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2024-02-01&model-name=MyCustomModelName`
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01&model-name=MyCustomModelName`

 ### Specify languages
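The changed URL above can also be assembled programmatically. A minimal Python sketch, assuming a placeholder endpoint host (only `MyCustomModelName` and the query parameters come from the diff):

```python
from urllib.parse import urlencode

def build_analyze_url(endpoint: str, model_name: str, api_version: str = "2023-02-01") -> str:
    """Build the Image Analysis request URL for a custom-trained model.

    With a custom model, the features query parameter is omitted and
    `model-name` is set instead, as the doc text above describes.
    """
    query = urlencode({"api-version": api_version, "model-name": model_name})
    return f"https://{endpoint}/computervision/imageanalysis:analyze?{query}"

# "myresource.cognitiveservices.azure.com" is a hypothetical endpoint host.
url = build_analyze_url("myresource.cognitiveservices.azure.com", "MyCustomModelName")
```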

articles/ai-services/speech-service/batch-synthesis-properties.md

Lines changed: 1 addition & 1 deletion
@@ -31,7 +31,7 @@ Batch synthesis properties are described in the following table.
 |`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
 |`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
 |`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
-|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-AvaNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result is written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but the maximum JSON payload size (including all text inputs and other properties) is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-AvaMultilingualNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result is written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but the maximum JSON payload size (including all text inputs and other properties) is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
 |`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
 |`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
 |`properties`|A defined set of optional batch synthesis configuration settings.|
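The `inputs` rules in the table above (one object per output file, "\r\n" for paragraph breaks, 500 KB payload cap) can be sketched as a small request-body builder. Property names `displayName`, `textType`, and `inputs` are from the table; the default display name is illustrative:

```python
import json

def build_batch_synthesis_body(texts, display_name="my batch job"):
    """Assemble a batch synthesis request body: one input object per text.

    Paragraph breaks within a single input use the "\r\n" string, so one
    audio output file can hold multiple paragraphs.
    """
    body = {
        "displayName": display_name,
        "textType": "PlainText",
        "inputs": [{"text": t} for t in texts],
    }
    # The whole JSON payload (all text inputs plus other properties)
    # must stay under 500 kilobytes.
    payload = json.dumps(body)
    if len(payload.encode("utf-8")) > 500 * 1024:
        raise ValueError("batch synthesis payload exceeds 500 KB")
    return body

body = build_batch_synthesis_body([
    "synthesize this to a file",
    "first paragraph\r\nsecond paragraph in the same file",
])
```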

articles/ai-services/speech-service/how-to-audio-content-creation.md

Lines changed: 1 addition & 1 deletion
@@ -129,7 +129,7 @@ You can get your content into the Audio Content Creation tool in either of two w

 ```xml
 <speak xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" version="1.0" xml:lang="en-US">
-    <voice name="en-US-AvaNeural">
+    <voice name="en-US-AvaMultilingualNeural">
         Welcome to use Audio Content Creation <break time="10ms" />to customize audio output for your products.
     </voice>
 </speak>

articles/ai-services/speech-service/includes/how-to/speech-synthesis/cpp.md

Lines changed: 4 additions & 4 deletions
@@ -22,7 +22,7 @@ void synthesizeSpeech()
     auto speechConfig = SpeechConfig::FromSubscription("YourSpeechKey", "YourSpeechRegion");
     // Set either the `SpeechSynthesisVoiceName` or `SpeechSynthesisLanguage`.
     speechConfig->SetSpeechSynthesisLanguage("en-US");
-    speechConfig->SetSpeechSynthesisVoiceName("en-US-AvaNeural");
+    speechConfig->SetSpeechSynthesisVoiceName("en-US-AvaMultilingualNeural");
 }
 ```

@@ -157,7 +157,7 @@ To start using SSML for customization, make a minor change that switches the voi

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-AvaNeural">
+    <voice name="en-US-AvaMultilingualNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>

@@ -188,7 +188,7 @@ To start using SSML for customization, make a minor change that switches the voi
 ```

 > [!NOTE]
-> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `SpeechConfig.SetSpeechSynthesisVoiceName("en-US-ChristopherNeural")`.
+> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `SpeechConfig.SetSpeechSynthesisVoiceName("en-US-AndrewMultilingualNeural")`.

 ## Subscribe to synthesizer events

@@ -227,7 +227,7 @@ int main()
     speechConfig->SetProperty(PropertyId::SpeechServiceResponse_RequestSentenceBoundary, "true");

     const auto ssml = R"(<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>
-        <voice name = 'en-US-AvaNeural'>
+        <voice name = 'en-US-AvaMultilingualNeural'>
             <mstts:viseme type = 'redlips_front' />
             The rainbow has seven colors : <bookmark mark = 'colors_list_begin' />Red, orange, yellow, green, blue, indigo, and violet.<bookmark mark = 'colors_list_end' />.
         </voice>

articles/ai-services/speech-service/includes/how-to/speech-synthesis/csharp.md

Lines changed: 4 additions & 4 deletions
@@ -23,7 +23,7 @@ static async Task SynthesizeAudioAsync()
     var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
     // Set either the `SpeechSynthesisVoiceName` or `SpeechSynthesisLanguage`.
     speechConfig.SpeechSynthesisLanguage = "en-US";
-    speechConfig.SpeechSynthesisVoiceName = "en-US-AvaNeural";
+    speechConfig.SpeechSynthesisVoiceName = "en-US-AvaMultilingualNeural";
 }
 ```

@@ -160,7 +160,7 @@ To start using SSML for customization, you make a minor change that switches the

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-AvaNeural">
+    <voice name="en-US-AvaMultilingualNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>

@@ -188,7 +188,7 @@ To start using SSML for customization, you make a minor change that switches the
 ```

 > [!NOTE]
-> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `SpeechConfig.SpeechSynthesisVoiceName = "en-US-AvaNeural";`.
+> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `SpeechConfig.SpeechSynthesisVoiceName = "en-US-AvaMultilingualNeural";`.

 ## Subscribe to synthesizer events

@@ -213,7 +213,7 @@ class Program
 {
     var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);

-    var speechSynthesisVoiceName = "en-US-AvaNeural";
+    var speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";
     var ssml = @$"<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>
         <voice name='{speechSynthesisVoiceName}'>
             <mstts:viseme type='redlips_front'/>

articles/ai-services/speech-service/includes/how-to/speech-synthesis/go.md

Lines changed: 4 additions & 4 deletions
@@ -293,7 +293,7 @@ if err != nil {
 defer speechConfig.Close()

 speechConfig.SetSpeechSynthesisLanguage("en-US")
-speechConfig.SetSpeechSynthesisVoiceName("en-US-AvaNeural")
+speechConfig.SetSpeechSynthesisVoiceName("en-US-AvaMultilingualNeural")
 ```

 All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is, "I'm excited to try text to speech," and you select `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent.

@@ -320,7 +320,7 @@ First, create a new XML file for the SSML configuration in your root project dir

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-AvaNeural">
+    <voice name="en-US-AvaMultilingualNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>

@@ -329,7 +329,7 @@ First, create a new XML file for the SSML configuration in your root project dir
 Next, you need to change the speech synthesis request to reference your XML file. The request is mostly the same, but instead of using the `SpeakTextAsync()` function, you use `SpeakSsmlAsync()`. This function expects an XML string, so you first load your SSML configuration as a string. From this point, the result object is exactly the same as previous examples.

 > [!NOTE]
-> To set the voice without using SSML, you can set the property on `SpeechConfig` by using `speechConfig.SetSpeechSynthesisVoiceName("en-US-AvaNeural")`.
+> To set the voice without using SSML, you can set the property on `SpeechConfig` by using `speechConfig.SetSpeechSynthesisVoiceName("en-US-AvaMultilingualNeural")`.

 ## Subscribe to synthesizer events

@@ -445,7 +445,7 @@ func main() {
     speechSynthesizer.VisemeReceived(visemeReceivedHandler)
     speechSynthesizer.WordBoundary(wordBoundaryHandler)

-    speechSynthesisVoiceName := "en-US-AvaNeural"
+    speechSynthesisVoiceName := "en-US-AvaMultilingualNeural"

     ssml := fmt.Sprintf(`<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>
         <voice name='%s'>

articles/ai-services/speech-service/includes/how-to/speech-synthesis/java.md

Lines changed: 4 additions & 4 deletions
@@ -23,7 +23,7 @@ public static void main(String[] args) {
     SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSpeechKey", "YourSpeechRegion");
     // Set either the `SpeechSynthesisVoiceName` or `SpeechSynthesisLanguage`.
     speechConfig.setSpeechSynthesisLanguage("en-US");
-    speechConfig.setSpeechSynthesisVoiceName("en-US-AvaNeural");
+    speechConfig.setSpeechSynthesisVoiceName("en-US-AvaMultilingualNeural");
 }
 ```

@@ -160,7 +160,7 @@ To start using SSML for customization, you make a minor change that switches the

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-AvaNeural">
+    <voice name="en-US-AvaMultilingualNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>

@@ -201,7 +201,7 @@ To start using SSML for customization, you make a minor change that switches the
 ```

 > [!NOTE]
-> To change the voice without using SSML, set the property on `SpeechConfig` by using `SpeechConfig.setSpeechSynthesisVoiceName("en-US-AvaNeural");`.
+> To change the voice without using SSML, set the property on `SpeechConfig` by using `SpeechConfig.setSpeechSynthesisVoiceName("en-US-AvaMultilingualNeural");`.

 ## Subscribe to synthesizer events

@@ -232,7 +232,7 @@ public class SpeechSynthesis {
     // Required for WordBoundary event sentences.
     speechConfig.setProperty(PropertyId.SpeechServiceResponse_RequestSentenceBoundary, "true");

-    String speechSynthesisVoiceName = "en-US-AvaNeural";
+    String speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";

     String ssml = String.format("<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>"
         .concat(String.format("<voice name='%s'>", speechSynthesisVoiceName))

articles/ai-services/speech-service/includes/how-to/speech-synthesis/javascript.md

Lines changed: 4 additions & 4 deletions
@@ -22,7 +22,7 @@ function synthesizeSpeech() {
     const speechConfig = sdk.SpeechConfig.fromSubscription("YourSpeechKey", "YourSpeechRegion");
     // Set either the `SpeechSynthesisVoiceName` or `SpeechSynthesisLanguage`.
     speechConfig.speechSynthesisLanguage = "en-US";
-    speechConfig.speechSynthesisVoiceName = "en-US-AvaNeural";
+    speechConfig.speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";
 }

 synthesizeSpeech();

@@ -292,7 +292,7 @@ To start using SSML for customization, you make a minor change that switches the

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-AvaNeural">
+    <voice name="en-US-AvaMultilingualNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>

@@ -338,7 +338,7 @@ To start using SSML for customization, you make a minor change that switches the
 ```

 > [!NOTE]
-> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `SpeechConfig.speechSynthesisVoiceName = "en-US-AvaNeural";`.
+> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `SpeechConfig.speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";`.

 ## Subscribe to synthesizer events

@@ -362,7 +362,7 @@ Here's an example that shows how to subscribe to events for speech synthesis. Yo
     const speechConfig = sdk.SpeechConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
     const audioConfig = sdk.AudioConfig.fromAudioFileOutput(audioFile);

-    var speechSynthesisVoiceName = "en-US-AvaNeural";
+    var speechSynthesisVoiceName = "en-US-AvaMultilingualNeural";
     var ssml = `<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'> \r\n \
         <voice name='${speechSynthesisVoiceName}'> \r\n \
             <mstts:viseme type='redlips_front'/> \r\n \

articles/ai-services/speech-service/includes/how-to/speech-synthesis/python.md

Lines changed: 4 additions & 4 deletions
@@ -19,7 +19,7 @@ Specify the language or voice of `SpeechConfig` to match your input text and use

 ```python
 # Set either the `SpeechSynthesisVoiceName` or `SpeechSynthesisLanguage`.
 speech_config.speech_synthesis_language = "en-US"
-speech_config.speech_synthesis_voice_name ="en-US-AvaNeural"
+speech_config.speech_synthesis_voice_name ="en-US-AvaMultilingualNeural"
 ```

 All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is, "I'm excited to try text to speech," and you select `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent.

@@ -129,7 +129,7 @@ To start using SSML for customization, make a minor change that switches the voi

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-AvaNeural">
+    <voice name="en-US-AvaMultilingualNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>

@@ -153,7 +153,7 @@ To start using SSML for customization, make a minor change that switches the voi
 ```

 > [!NOTE]
-> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `speech_config.speech_synthesis_voice_name = "en-US-AvaNeural"`.
+> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `speech_config.speech_synthesis_voice_name = "en-US-AvaMultilingualNeural"`.

 ## Subscribe to synthesizer events

@@ -222,7 +222,7 @@ speech_synthesizer.viseme_received.connect(speech_synthesizer_viseme_received_cb
 speech_synthesizer.synthesis_word_boundary.connect(speech_synthesizer_word_boundary_cb)

 # The language of the voice that speaks.
-speech_synthesis_voice_name='en-US-AvaNeural'
+speech_synthesis_voice_name='en-US-AvaMultilingualNeural'

 ssml = """<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>
     <voice name='{}'>
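The Python hunks above interpolate the voice name into an SSML string with `str.format`; this standalone sketch reproduces that assembly without any Speech SDK dependency, so it runs without a key (the sentence text is illustrative):

```python
speech_synthesis_voice_name = "en-US-AvaMultilingualNeural"

# Build the viseme-enabled SSML document, substituting the voice name.
ssml = """<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>
    <voice name='{}'>
        <mstts:viseme type='redlips_front'/>
        The rainbow has seven colors.
    </voice>
</speak>""".format(speech_synthesis_voice_name)
```

The resulting string is what `speak_ssml_async` would consume in the SDK samples above.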

articles/ai-services/speech-service/includes/how-to/speech-synthesis/rest.md

Lines changed: 1 addition & 1 deletion
@@ -34,7 +34,7 @@ curl --location --request POST 'https://YOUR_RESOURCE_REGION.tts.speech.microsof
 --header 'X-Microsoft-OutputFormat: audio-16khz-128kbitrate-mono-mp3' \
 --header 'User-Agent: curl' \
 --data-raw '<speak version='\''1.0'\'' xml:lang='\''en-US'\''>
-    <voice name='\''en-US-AvaNeural'\''>
+    <voice name='\''en-US-AvaMultilingualNeural'\''>
         I am excited to try text to speech
     </voice>
 </speak>' > output.mp3
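The curl command above is a plain HTTP POST, so it maps directly to Python. This sketch only assembles the request pieces; the region and key are placeholders, and the `/cognitiveservices/v1` path and `Ocp-Apim-Subscription-Key` header come from the Speech REST docs rather than the truncated curl line, so treat them as assumptions. Actually sending it (e.g. with `requests.post`) needs a valid Speech resource:

```python
region = "YOUR_RESOURCE_REGION"  # placeholder, as in the curl example
url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"

headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_SPEECH_KEY",  # placeholder key
    "Content-Type": "application/ssml+xml",
    "X-Microsoft-OutputFormat": "audio-16khz-128kbitrate-mono-mp3",
    "User-Agent": "python-sketch",
}

# Same SSML body as the curl example, without shell quote escaping.
body = (
    "<speak version='1.0' xml:lang='en-US'>"
    "<voice name='en-US-AvaMultilingualNeural'>"
    "I am excited to try text to speech"
    "</voice></speak>"
)
```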
