
Commit dd02240

Merge pull request #5188 from MicrosoftDocs/main
Merge main to live, 4 AM
2 parents 71dc5ee + f53d86b commit dd02240

50 files changed: 162 additions and 162 deletions

articles/ai-services/speech-service/audio-processing-speech-sdk.md

Lines changed: 5 additions & 5 deletions
@@ -39,7 +39,7 @@ var recognizer = new SpeechRecognizer(speechConfig, audioInput);
 ### [C++](#tab/cpp)
 
 ```cpp
-auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSubscriptionKey");
+auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSpeechResoureKey");
 
 auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
 auto audioInput = AudioConfig::FromDefaultMicrophoneInput(audioProcessingOptions);
@@ -80,7 +80,7 @@ var recognizer = new SpeechRecognizer(speechConfig, audioInput);
 ### [C++](#tab/cpp)
 
 ```cpp
-auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSubscriptionKey");
+auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSpeechResoureKey");
 
 auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT, PresetMicrophoneArrayGeometry::Linear2);
 auto audioInput = AudioConfig::FromMicrophoneInput("hw:0,1", audioProcessingOptions);
@@ -132,7 +132,7 @@ var recognizer = new SpeechRecognizer(speechConfig, audioInput);
 ### [C++](#tab/cpp)
 
 ```cpp
-auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSubscriptionKey");
+auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSpeechResoureKey");
 
 MicrophoneArrayGeometry microphoneArrayGeometry
 {
@@ -188,7 +188,7 @@ var recognizer = new SpeechRecognizer(speechConfig, audioInput);
 ### [C++](#tab/cpp)
 
 ```cpp
-auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSubscriptionKey");
+auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSpeechResoureKey");
 
 auto audioProcessingOptions = AudioProcessingOptions::Create(AUDIO_INPUT_PROCESSING_DISABLE_ECHO_CANCELLATION | AUDIO_INPUT_PROCESSING_DISABLE_NOISE_SUPPRESSION | AUDIO_INPUT_PROCESSING_ENABLE_DEFAULT);
 auto audioInput = AudioConfig::FromDefaultMicrophoneInput(audioProcessingOptions);
@@ -241,7 +241,7 @@ var recognizer = new SpeechRecognizer(speechConfig, audioInput);
 ### [C++](#tab/cpp)
 
 ```cpp
-auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSubscriptionKey");
+auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSpeechResoureKey");
 
 MicrophoneArrayGeometry microphoneArrayGeometry
 {

articles/ai-services/speech-service/batch-transcription-create.md

Lines changed: 8 additions & 8 deletions
@@ -39,12 +39,12 @@ For more information, see [Request configuration options](#request-configuration
 
 Make an HTTP POST request that uses the URI as shown in the following [Transcriptions - Submit](/rest/api/speechtotext/transcriptions/submit) example.
 
-- Replace `YourSubscriptionKey` with your Azure AI Foundry resource key.
+- Replace `YourSpeechResoureKey` with your Azure AI Foundry resource key.
 - Replace `YourServiceRegion` with your Azure AI Foundry resource region.
 - Set the request body properties as previously described.
 
 ```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey" -H "Content-Type: application/json" -d '{
 "contentUrls": [
 "https://crbn.us/hello.wav",
 "https://crbn.us/whatstheweatherlike.wav"
@@ -215,7 +215,7 @@ Optionally, you can modify the previous [create transcription example](#create-a
 ::: zone pivot="rest-api"
 
 ```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey" -H "Content-Type: application/json" -d '{
 "contentUrls": [
 "https://crbn.us/hello.wav",
 "https://crbn.us/whatstheweatherlike.wav"
@@ -263,16 +263,16 @@ To use a Whisper model for batch transcription, you need to set the `model` prop
 ::: zone pivot="rest-api"
 You can make a [Models - List Base Models](/rest/api/speechtotext/models/list-base-models) request to get available base models for all locales.
 
-Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSubscriptionKey` with your Azure AI Foundry resource key. Replace `eastus` if you're using a different region.
+Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSpeechResoureKey` with your Azure AI Foundry resource key. Replace `eastus` if you're using a different region.
 
 ```azurecli-interactive
-curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/models/base?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/models/base?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey"
 ```
 
 By default, only the 100 oldest base models are returned. Use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.
 
 ```azurecli-interactive
-curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/models/base?api-version=2024-11-15&skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/models/base?api-version=2024-11-15&skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey"
 ```
 
 ::: zone-end
@@ -326,10 +326,10 @@ The `displayName` property of a Whisper model contains "Whisper" as shown in thi
 
 ::: zone pivot="rest-api"
 
-You set the full model URI as shown in this example for the `eastus` region. Replace `YourSubscriptionKey` with your Azure AI Foundry resource key. Replace `eastus` if you're using a different region.
+You set the full model URI as shown in this example for the `eastus` region. Replace `YourSpeechResoureKey` with your Azure AI Foundry resource key. Replace `eastus` if you're using a different region.
 
 ```azurecli-interactive
-curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey" -H "Content-Type: application/json" -d '{
 "contentUrls": [
 "https://crbn.us/hello.wav",
 "https://crbn.us/whatstheweatherlike.wav"

articles/ai-services/speech-service/batch-transcription-get.md

Lines changed: 4 additions & 4 deletions
@@ -25,10 +25,10 @@ To get the status of the transcription job, call the [Transcriptions - Get](/res
 > [!IMPORTANT]
 > Batch transcription jobs are scheduled on a best-effort basis. At peak hours, it might take up to 30 minutes or longer for a transcription job to start processing. Most of the time during the execution the transcription status is `Running`. The reason is because the job is assigned the `Running` status the moment it moves to the batch transcription backend system. When the base model is used, this assignment happens almost immediately; it's slightly slower for custom models. Thus, the amount of time a transcription job spends in the `Running` state doesn't correspond to the actual transcription time but also includes waiting time in the internal queues.
 
-Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSpeechResoureKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
 
 ```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey"
 ```
 
 You should receive a response body in the following format:
@@ -135,10 +135,10 @@ spx help batch transcription
 
 The [Transcriptions - List Files](/rest/api/speechtotext/transcriptions/list-files) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
 
-Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSpeechResoureKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
 
 ```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId/files?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId/files?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey"
 ```
 
 You should receive a response body in the following format:
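
Because jobs can sit in the `Running` state for a while (see the note in the first hunk), it's common to poll the Transcriptions - Get request until the job finishes. A minimal sketch only, not part of the commit, assuming the response body carries the job state in a `status` property with terminal values such as `Succeeded` or `Failed`, and that `jq` is available:

```azurecli-interactive
# Illustrative sketch (not part of this commit): poll the status request shown above
# until the job reports a terminal state. Assumes a "status" property in the response.
while true; do
  status=$(curl -s -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId?api-version=2024-11-15" \
    -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey" | jq -r '.status')
  echo "Transcription status: $status"
  if [ "$status" = "Succeeded" ] || [ "$status" = "Failed" ]; then
    break
  fi
  sleep 30
done
```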

articles/ai-services/speech-service/fast-transcription-create.md

Lines changed: 15 additions & 15 deletions
@@ -44,17 +44,17 @@ Make a multipart/form-data POST request to the `transcriptions` endpoint with th
 
 The following example shows how to transcribe an audio file with a specified locale. If you know the locale of the audio file, you can specify it to improve transcription accuracy and minimize the latency.
 
-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
 - Replace `YourServiceRegion` with your Speech resource region.
 - Replace `YourAudioFile` with the path to your audio file.
 
 > [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
 
 ```azurecli-interactive
 curl --location 'https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version=2024-11-15' \
 --header 'Content-Type: multipart/form-data' \
---header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey' \
+--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey' \
 --form 'audio=@"YourAudioFile"' \
 --form 'definition="{
 "locales":["en-US"]}"'
@@ -298,17 +298,17 @@ The following example shows how to transcribe an audio file with language identi
 > [!NOTE]
 > The language identification in fast transcription is designed to identify one main language locale per audio file. If you need to transcribe multi-lingual contents in the audio, please consider [multi-lingual transcription (preview)](?tabs=multilingual-transcription-on).
 
-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
 - Replace `YourServiceRegion` with your Speech resource region.
 - Replace `YourAudioFile` with the path to your audio file.
 
 > [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
 
 ```azurecli-interactive
 curl --location 'https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version=2024-11-15' \
 --header 'Content-Type: multipart/form-data' \
---header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey' \
+--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey' \
 --form 'audio=@"YourAudioFile"' \
 --form 'definition="{
 "locales":["en-US","ja-JP"]}"'
@@ -587,17 +587,17 @@ Make a multipart/form-data POST request to the `transcriptions` endpoint with th
 
 The following example shows how to transcribe an audio file with the latest multi-lingual speech transcription model. If your audio contains multi-lingual contents that you want to transcribe continuously and accurately, you can use the latest multi-lingual speech transcription model without specifying the locale codes.
 
-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
 - Replace `YourServiceRegion` with your Speech resource region.
 - Replace `YourAudioFile` with the path to your audio file.
 
 > [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
 
 ```azurecli-interactive
 curl --location 'https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version=2024-11-15' \
 --header 'Content-Type: multipart/form-data' \
---header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey' \
+--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey' \
 --form 'audio=@"YourAudioFile"' \
 --form 'definition="{
 "locales":[]}"'
@@ -1202,17 +1202,17 @@ Make a multipart/form-data POST request to the `transcriptions` endpoint with th
 
 The following example shows how to transcribe an audio file with diarization enabled. Diarization distinguishes between different speakers in the conversation. The Speech service provides information about which speaker was speaking a particular part of the transcribed speech.
 
-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
 - Replace `YourServiceRegion` with your Speech resource region.
 - Replace `YourAudioFile` with the path to your audio file.
 
 > [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
 
 ```azurecli-interactive
 curl --location 'https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version=2024-11-15' \
 --header 'Content-Type: multipart/form-data' \
---header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey' \
+--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey' \
 --form 'audio=@"YourAudioFile"' \
 --form 'definition="{
 "locales":["en-US"],
@@ -1474,17 +1474,17 @@ Make a multipart/form-data POST request to the `transcriptions` endpoint with th
 
 The following example shows how to transcribe an audio file that has one or two channels. Multi-channel transcriptions are useful for audio files with multiple channels, such as audio files with multiple speakers or audio files with background noise. By default, the fast transcription API merges all input channels into a single channel and then performs the transcription. If this isn't desirable, channels can be transcribed independently without merging.
 
-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
 - Replace `YourServiceRegion` with your Speech resource region.
 - Replace `YourAudioFile` with the path to your audio file.
 
 > [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
 
 ```azurecli-interactive
 curl --location 'https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version=2024-11-15' \
 --header 'Content-Type: multipart/form-data' \
---header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey' \
+--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey' \
 --form 'audio=@"YourAudioFile"' \
 --form 'definition="{
 "locales":["en-US"],
