Commit 2db01ed

Merge branch 'main' into release-migrate-postgresql-single-to-flexible

2 parents: b45f15d + 880bd82

302 files changed: +3723 −1999 lines


.openpublishing.redirection.defender-for-cloud.json

Lines changed: 5 additions & 0 deletions
```diff
@@ -940,6 +940,11 @@
       "redirect_url": "/azure/defender-for-cloud/agentless-vulnerability-assessment-aws",
       "redirect_document_id": true
     },
+    {
+      "source_path_from_root": "/articles/defender-for-cloud/custom-security-policies.md",
+      "redirect_url": "/azure/defender-for-cloud/create-custom-recommendations",
+      "redirect_document_id": false
+    },
     {
       "source_path_from_root": "/articles/defender-for-cloud/how-to-migrate-to-built-in.md",
       "redirect_url": "/azure/defender-for-cloud/how-to-transition-to-built-in",
```

.openpublishing.redirection.json

Lines changed: 12 additions & 0 deletions
```diff
@@ -10979,6 +10979,17 @@
       "redirect_document_id": false
     },
     {
+      "source_path_from_root": "/articles/notification-hubs/notification-hubs-high-availability.md",
+      "redirect_url": "/azure/reliability/reliability-notification-hubs",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/reliability/reliability-guidance-overview.md",
+      "redirect_url": "/azure/reliability/overview-reliability-guidance",
+      "redirect_document_id": false
+    },
+    {
+
       "source_path_from_root": "/articles/event-hubs/move-across-regions.md",
       "redirect_url": "/azure/operational-excellence/relocation-event-hub",
       "redirect_document_id": false
@@ -11004,5 +11015,6 @@
       "redirect_document_id": false
     }

+
   ]
 }
```
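The entries added to both redirection files share one shape. As an illustration outside the commit itself (the `validate_redirects` helper is hypothetical, not part of the OpenPublishing toolchain), a small check can confirm each entry carries the three expected keys and a site-rooted `redirect_url`:

```python
import json

# Sample entries mirroring the shape of the additions in this commit.
ENTRIES_JSON = """
[
  {
    "source_path_from_root": "/articles/notification-hubs/notification-hubs-high-availability.md",
    "redirect_url": "/azure/reliability/reliability-notification-hubs",
    "redirect_document_id": false
  },
  {
    "source_path_from_root": "/articles/reliability/reliability-guidance-overview.md",
    "redirect_url": "/azure/reliability/overview-reliability-guidance",
    "redirect_document_id": false
  }
]
"""

REQUIRED_KEYS = {"source_path_from_root", "redirect_url", "redirect_document_id"}

def validate_redirects(entries):
    """Return a list of error strings; an empty list means all entries look well-formed."""
    errors = []
    for i, entry in enumerate(entries):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            errors.append(f"entry {i}: missing keys {sorted(missing)}")
        url = entry.get("redirect_url", "")
        if not url.startswith("/"):
            errors.append(f"entry {i}: redirect_url should be site-rooted, got {url!r}")
        if not isinstance(entry.get("redirect_document_id"), bool):
            errors.append(f"entry {i}: redirect_document_id should be a boolean")
    return errors

print(validate_redirects(json.loads(ENTRIES_JSON)))  # [] means the sample entries pass
```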

articles/ai-services/openai/how-to/assistant.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -365,7 +365,7 @@ We had requested that the model generate an image of a sine wave. In order to do

 ```python
 data = json.loads(messages.model_dump_json(indent=2)) # Load JSON data into a Python object
-image_file_id = data['data'][1]['content'][0]['image_file']['file_id']
+image_file_id = data['data'][0]['content'][0]['image_file']['file_id']

 print(image_file_id) # Outputs: assistant-1YGVTvNzc2JXajI5JU9F0HMD
 ```
````
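The one-line fix above moves the lookup from `data['data'][1]` to `data['data'][0]`: the messages list is returned newest first by default, so the assistant's image-bearing message is the first element. A minimal sketch with a mocked payload (the real structure and IDs come from the OpenAI client):

```python
import json

# Mocked response shaped like the Assistants messages payload; illustrative only.
messages_json = json.dumps({
    "data": [
        {  # index 0: the most recent message, carrying the generated image
            "content": [
                {"image_file": {"file_id": "assistant-1YGVTvNzc2JXajI5JU9F0HMD"}}
            ]
        },
        {  # index 1: the earlier user request, text only
            "content": [{"text": {"value": "Generate an image of a sine wave."}}]
        },
    ]
})

data = json.loads(messages_json)
# The corrected lookup: the image file hangs off the first (latest) message.
image_file_id = data["data"][0]["content"][0]["image_file"]["file_id"]
print(image_file_id)
```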

articles/ai-services/speech-service/batch-synthesis-properties.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -31,7 +31,7 @@ Batch synthesis properties are described in the following table.
 |`description`|The description of the batch synthesis.<br/><br/>This property is optional.|
 |`displayName`|The name of the batch synthesis. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
 |`id`|The batch synthesis job ID.<br/><br/>This property is read-only.|
-|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-JennyNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result is written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but the maximum JSON payload size (including all text inputs and other properties) is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
+|`inputs`|The plain text or SSML to be synthesized.<br/><br/>When the `textType` is set to `"PlainText"`, provide plain text as shown here: `"inputs": [{"text": "The rainbow has seven colors."}]`. When the `textType` is set to `"SSML"`, provide text in the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) as shown here: `"inputs": [{"text": "<speak version='\''1.0'\'' xml:lang='\''en-US'\''><voice xml:lang='\''en-US'\'' xml:gender='\''Female'\'' name='\''en-US-AvaNeural'\''>The rainbow has seven colors.</voice></speak>"}]`.<br/><br/>Include up to 1,000 text objects if you want multiple audio output files. Here's example input text that should be synthesized to two audio output files: `"inputs": [{"text": "synthesize this to a file"},{"text": "synthesize this to another file"}]`. However, if the `properties.concatenateResult` property is set to `true`, then each synthesized result is written to the same audio output file.<br/><br/>You don't need separate text inputs for new paragraphs. Within any of the (up to 1,000) text inputs, you can specify new paragraphs using the "\r\n" (newline) string. Here's example input text with two paragraphs that should be synthesized to the same audio output file: `"inputs": [{"text": "synthesize this to a file\r\nsynthesize this to another paragraph in the same file"}]`<br/><br/>There are no paragraph limits, but the maximum JSON payload size (including all text inputs and other properties) is 500 kilobytes.<br/><br/>This property is required when you create a new batch synthesis job. This property isn't included in the response when you get the synthesis job.|
 |`lastActionDateTime`|The most recent date and time when the `status` property value changed.<br/><br/>This property is read-only.|
 |`outputs.result`|The location of the batch synthesis result files with audio output and logs.<br/><br/>This property is read-only.|
 |`properties`|A defined set of optional batch synthesis configuration settings.|
```

articles/ai-services/speech-service/how-to-audio-content-creation.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -129,7 +129,7 @@ You can get your content into the Audio Content Creation tool in either of two w

 ```xml
 <speak xmlns="http://www.w3.org/2001/10/synthesis" xmlns:mstts="http://www.w3.org/2001/mstts" version="1.0" xml:lang="en-US">
-    <voice name="en-US-JennyNeural">
+    <voice name="en-US-AvaNeural">
         Welcome to use Audio Content Creation <break time="10ms" />to customize audio output for your products.
     </voice>
 </speak>
````

articles/ai-services/speech-service/includes/how-to/compressed-audio-input/gstreamer-android.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -73,7 +73,7 @@ APP_PLATFORM = android-21
 APP_BUILD_SCRIPT = Android.mk
 ```

-You can build `libgstreamer_android.so` by using the following command on Ubuntu 18.04 or 20.04. The following command lines have been tested for [GStreamer Android version 1.14.4](https://gstreamer.freedesktop.org/data/pkg/android/1.14.4/gstreamer-1.0-android-universal-1.14.4.tar.bz2) with [Android NDK b16b.](https://dl.google.com/android/repository/android-ndk-r16b-linux-x86_64.zip)
+You can build `libgstreamer_android.so` by using the following command on Ubuntu 18.04 or 20.04. The following command lines have been tested for [GStreamer Android version 1.14.4](https://gstreamer.freedesktop.org/download/) with [Android NDK b16b.](https://dl.google.com/android/repository/android-ndk-r16b-linux-x86_64.zip)

 ```sh
 # Assuming wget and unzip are already installed on the system
@@ -83,7 +83,7 @@ wget https://dl.google.com/android/repository/android-ndk-r16b-linux-x86_64.zip
 unzip -q -o android-ndk-r16b-linux-x86_64.zip
 export PATH=$PATH:$(pwd)/android-ndk-r16b
 export NDK_PROJECT_PATH=$(pwd)/android-ndk-r16b
-wget https://gstreamer.freedesktop.org/data/pkg/android/1.14.4/gstreamer-1.0-android-universal-1.14.4.tar.bz2
+wget https://gstreamer.freedesktop.org/download/
 mkdir gstreamer_android
 tar -xjf gstreamer-1.0-android-universal-1.14.4.tar.bz2 -C $(pwd)/gstreamer_android/
 export GSTREAMER_ROOT_ANDROID=$(pwd)/gstreamer_android
````

articles/ai-services/speech-service/includes/how-to/speech-synthesis/cpp.md

Lines changed: 3 additions & 3 deletions
````diff
@@ -22,7 +22,7 @@ void synthesizeSpeech()
     auto speechConfig = SpeechConfig::FromSubscription("YourSpeechKey", "YourSpeechRegion");
     // Set either the `SpeechSynthesisVoiceName` or `SpeechSynthesisLanguage`.
     speechConfig->SetSpeechSynthesisLanguage("en-US");
-    speechConfig->SetSpeechSynthesisVoiceName("en-US-JennyNeural");
+    speechConfig->SetSpeechSynthesisVoiceName("en-US-AvaNeural");
 }
 ```

@@ -157,7 +157,7 @@ To start using SSML for customization, make a minor change that switches the voi

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-JennyNeural">
+    <voice name="en-US-AvaNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>
@@ -227,7 +227,7 @@ int main()
     speechConfig->SetProperty(PropertyId::SpeechServiceResponse_RequestSentenceBoundary, "true");

     const auto ssml = R"(<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>
-        <voice name = 'en-US-JennyNeural'>
+        <voice name = 'en-US-AvaNeural'>
             <mstts:viseme type = 'redlips_front' />
             The rainbow has seven colors : <bookmark mark = 'colors_list_begin' />Red, orange, yellow, green, blue, indigo, and violet.<bookmark mark = 'colors_list_end' />.
         </voice>
````

articles/ai-services/speech-service/includes/how-to/speech-synthesis/csharp.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -23,7 +23,7 @@ static async Task SynthesizeAudioAsync()
     var speechConfig = SpeechConfig.FromSubscription("YourSpeechKey", "YourSpeechRegion");
     // Set either the `SpeechSynthesisVoiceName` or `SpeechSynthesisLanguage`.
     speechConfig.SpeechSynthesisLanguage = "en-US";
-    speechConfig.SpeechSynthesisVoiceName = "en-US-JennyNeural";
+    speechConfig.SpeechSynthesisVoiceName = "en-US-AvaNeural";
 }
 ```

@@ -160,7 +160,7 @@ To start using SSML for customization, you make a minor change that switches the

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-JennyNeural">
+    <voice name="en-US-AvaNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>
@@ -188,7 +188,7 @@ To start using SSML for customization, you make a minor change that switches the
 ```

 > [!NOTE]
-> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `SpeechConfig.SpeechSynthesisVoiceName = "en-US-JennyNeural";`.
+> To change the voice without using SSML, you can set the property on `SpeechConfig` by using `SpeechConfig.SpeechSynthesisVoiceName = "en-US-AvaNeural";`.

 ## Subscribe to synthesizer events

@@ -213,7 +213,7 @@ class Program
 {
     var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);

-    var speechSynthesisVoiceName = "en-US-JennyNeural";
+    var speechSynthesisVoiceName = "en-US-AvaNeural";
     var ssml = @$"<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>
         <voice name='{speechSynthesisVoiceName}'>
             <mstts:viseme type='redlips_front'/>
````

articles/ai-services/speech-service/includes/how-to/speech-synthesis/go.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -293,7 +293,7 @@ if err != nil {
 defer speechConfig.Close()

 speechConfig.SetSpeechSynthesisLanguage("en-US")
-speechConfig.SetSpeechSynthesisVoiceName("en-US-JennyNeural")
+speechConfig.SetSpeechSynthesisVoiceName("en-US-AvaNeural")
 ```

 All neural voices are multilingual and fluent in their own language and English. For example, if the input text in English is, "I'm excited to try text to speech," and you select `es-ES-ElviraNeural`, the text is spoken in English with a Spanish accent.
@@ -320,7 +320,7 @@ First, create a new XML file for the SSML configuration in your root project dir

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-JennyNeural">
+    <voice name="en-US-AvaNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>
@@ -329,7 +329,7 @@ First, create a new XML file for the SSML configuration in your root project dir
 Next, you need to change the speech synthesis request to reference your XML file. The request is mostly the same, but instead of using the `SpeakTextAsync()` function, you use `SpeakSsmlAsync()`. This function expects an XML string, so you first load your SSML configuration as a string. From this point, the result object is exactly the same as previous examples.

 > [!NOTE]
-> To set the voice without using SSML, you can set the property on `SpeechConfig` by using `speechConfig.SetSpeechSynthesisVoiceName("en-US-JennyNeural")`.
+> To set the voice without using SSML, you can set the property on `SpeechConfig` by using `speechConfig.SetSpeechSynthesisVoiceName("en-US-AvaNeural")`.

 ## Subscribe to synthesizer events

@@ -445,7 +445,7 @@ func main() {
     speechSynthesizer.VisemeReceived(visemeReceivedHandler)
     speechSynthesizer.WordBoundary(wordBoundaryHandler)

-    speechSynthesisVoiceName := "en-US-JennyNeural"
+    speechSynthesisVoiceName := "en-US-AvaNeural"

     ssml := fmt.Sprintf(`<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>
         <voice name='%s'>
````

articles/ai-services/speech-service/includes/how-to/speech-synthesis/java.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -23,7 +23,7 @@ public static void main(String[] args) {
     SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSpeechKey", "YourSpeechRegion");
     // Set either the `SpeechSynthesisVoiceName` or `SpeechSynthesisLanguage`.
     speechConfig.setSpeechSynthesisLanguage("en-US");
-    speechConfig.setSpeechSynthesisVoiceName("en-US-JennyNeural");
+    speechConfig.setSpeechSynthesisVoiceName("en-US-AvaNeural");
 }
 ```

@@ -160,7 +160,7 @@ To start using SSML for customization, you make a minor change that switches the

 ```xml
 <speak version="1.0" xmlns="https://www.w3.org/2001/10/synthesis" xml:lang="en-US">
-    <voice name="en-US-JennyNeural">
+    <voice name="en-US-AvaNeural">
         When you're on the freeway, it's a good idea to use a GPS.
     </voice>
 </speak>
@@ -201,7 +201,7 @@ To start using SSML for customization, you make a minor change that switches the
 ```

 > [!NOTE]
-> To change the voice without using SSML, set the property on `SpeechConfig` by using `SpeechConfig.setSpeechSynthesisVoiceName("en-US-JennyNeural");`.
+> To change the voice without using SSML, set the property on `SpeechConfig` by using `SpeechConfig.setSpeechSynthesisVoiceName("en-US-AvaNeural");`.

 ## Subscribe to synthesizer events

@@ -232,7 +232,7 @@ public class SpeechSynthesis {
     // Required for WordBoundary event sentences.
     speechConfig.setProperty(PropertyId.SpeechServiceResponse_RequestSentenceBoundary, "true");

-    String speechSynthesisVoiceName = "en-US-JennyNeural";
+    String speechSynthesisVoiceName = "en-US-AvaNeural";

     String ssml = String.format("<speak version='1.0' xml:lang='en-US' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts'>"
         .concat(String.format("<voice name='%s'>", speechSynthesisVoiceName))
````
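Across the C++, C#, Go, and Java snippets in this commit, the change is mechanical: the example voice `en-US-JennyNeural` becomes `en-US-AvaNeural` inside the same SSML envelope. A language-neutral sketch of parameterizing that envelope (the `build_ssml` helper is illustrative, not part of the Speech SDK):

```python
# SSML envelope mirroring the one used throughout the diffs above.
SSML_TEMPLATE = (
    "<speak version='1.0' xml:lang='en-US' "
    "xmlns='http://www.w3.org/2001/10/synthesis' "
    "xmlns:mstts='http://www.w3.org/2001/mstts'>"
    "<voice name='{voice}'>{body}</voice></speak>"
)

def build_ssml(body, voice="en-US-AvaNeural"):
    """Wrap text in the SSML envelope, defaulting to the new example voice."""
    return SSML_TEMPLATE.format(voice=voice, body=body)

ssml = build_ssml("When you're on the freeway, it's a good idea to use a GPS.")
print(ssml)
```

Keeping the voice name in one parameter, as the C#, Go, and Java snippets already do with `speechSynthesisVoiceName`, is what makes a sweep like this commit a one-line change per file.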
