@@ -263,16 +263,16 @@ To use a Whisper model for batch transcription, you need to set the `model` prop
::: zone pivot="rest-api"

You can make a [Models - List Base Models](/rest/api/speechtotext/models/list-base-models) request to get available base models for all locales.

-Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSubscriptionKey` with your Azure AI Foundry resource key. Replace `eastus` if you're using a different region.
+Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSpeechResoureKey` with your Azure AI Foundry resource key. Replace `eastus` if you're using a different region.

```azurecli-interactive
-curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/models/base?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/models/base?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey"
```

By default, only the 100 oldest base models are returned. Use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.

```azurecli-interactive
-curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/models/base?api-version=2024-11-15&skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/models/base?api-version=2024-11-15&skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey"
```

::: zone-end
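The `skip`/`top` paging in the hunk above can be sketched as a small loop. This is an illustrative sketch, not part of the diff: the `eastus` region, the page size, and the fixed three-page stop condition are assumptions, and the actual `curl` call is left commented out so the sketch runs without a key.

```shell
# Page through base models with skip/top (assumptions: eastus region,
# three pages of 100; the real curl call is commented out).
REGION="eastus"
BASE="https://${REGION}.api.cognitive.microsoft.com/speechtotext/models/base"
for SKIP in 0 100 200; do
  URL="${BASE}?api-version=2024-11-15&skip=${SKIP}&top=100"
  echo "$URL"
  # curl -s -X GET "$URL" -H "Ocp-Apim-Subscription-Key: $SPEECH_KEY"
done
```

In a real script you would stop when a page comes back with fewer than `top` entries rather than hard-coding the page count.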
@@ -326,10 +326,10 @@ The `displayName` property of a Whisper model contains "Whisper" as shown in thi
::: zone pivot="rest-api"

-You set the full model URI as shown in this example for the `eastus` region. Replace `YourSubscriptionKey` with your Azure AI Foundry resource key. Replace `eastus` if you're using a different region.
+You set the full model URI as shown in this example for the `eastus` region. Replace `YourSpeechResoureKey` with your Azure AI Foundry resource key. Replace `eastus` if you're using a different region.
articles/ai-services/speech-service/batch-transcription-get.md (+4 -4)
@@ -25,10 +25,10 @@ To get the status of the transcription job, call the [Transcriptions - Get](/res
> [!IMPORTANT]
> Batch transcription jobs are scheduled on a best-effort basis. At peak hours, it might take up to 30 minutes or longer for a transcription job to start processing. Most of the time during the execution the transcription status is `Running`. That's because the job is assigned the `Running` status the moment it moves to the batch transcription backend system. When the base model is used, this assignment happens almost immediately; it's slightly slower for custom models. Thus, the amount of time a transcription job spends in the `Running` state doesn't correspond to the actual transcription time alone; it also includes waiting time in the internal queues.

-Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSpeechResoureKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.

```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey"
```
You should receive a response body in the following format:
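Because a job can sit in `Running` while it is still queued, callers typically wrap the status request above in a polling loop. The following is a hedged sketch, not part of the diff: `check_status` stands in for the real `curl` call (stubbed here to return `Succeeded` so the sketch runs offline), and the sleep interval is an arbitrary choice.

```shell
# Polling sketch: loop until the job leaves the NotStarted/Running states.
check_status() {
  # In practice: curl the transcriptions/YourTranscriptionId endpoint
  # shown above and extract .status from the JSON (e.g. with jq).
  echo "Succeeded"   # stubbed so the sketch runs without a key
}
STATUS="Running"
while [ "$STATUS" = "Running" ] || [ "$STATUS" = "NotStarted" ]; do
  STATUS=$(check_status "YourTranscriptionId")
  # sleep 30   # poll politely; jobs can queue for 30+ minutes at peak
done
echo "final status: $STATUS"
```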
@@ -135,10 +135,10 @@ spx help batch transcription
The [Transcriptions - List Files](/rest/api/speechtotext/transcriptions/list-files) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.

-Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
+Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSpeechResoureKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.

```azurecli-interactive
-curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId/files?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey"
+curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/transcriptions/YourTranscriptionId/files?api-version=2024-11-15" -H "Ocp-Apim-Subscription-Key: YourSpeechResoureKey"
```
You should receive a response body in the following format:
articles/ai-services/speech-service/fast-transcription-create.md (+15 -15)
@@ -44,17 +44,17 @@ Make a multipart/form-data POST request to the `transcriptions` endpoint with th
The following example shows how to transcribe an audio file with a specified locale. If you know the locale of the audio file, you can specify it to improve transcription accuracy and minimize the latency.

-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
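To make the multipart/form-data POST concrete, here is a hedged sketch that only assembles and prints the request rather than sending it. The `transcriptions:transcribe` path and the `definition` field contents are assumptions drawn from the surrounding text, not part of this diff.

```shell
# Build (but don't send) the multipart POST described above.
# Endpoint path and definition fields are illustrative assumptions.
REGION="YourServiceRegion"
URL="https://${REGION}.api.cognitive.microsoft.com/speechtotext/transcriptions:transcribe?api-version=2024-11-15"
DEFINITION='{"locales":["en-US"]}'
CMD="curl --location --request POST '$URL' --header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey' --form 'audio=@YourAudioFile' --form 'definition=$DEFINITION'"
echo "$CMD"
```

Dropping the leading `echo` (and the outer quoting) turns the sketch into a real request, once the placeholders are filled in.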
@@ -298,17 +298,17 @@ The following example shows how to transcribe an audio file with language identi
> [!NOTE]
> The language identification in fast transcription is designed to identify one main language locale per audio file. If you need to transcribe multi-lingual content in the audio, consider [multi-lingual transcription (preview)](?tabs=multilingual-transcription-on).

-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
@@ -587,17 +587,17 @@ Make a multipart/form-data POST request to the `transcriptions` endpoint with th
The following example shows how to transcribe an audio file with the latest multi-lingual speech transcription model. If your audio contains multi-lingual content that you want to transcribe continuously and accurately, you can use the latest multi-lingual speech transcription model without specifying locale codes.

-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
@@ -1202,17 +1202,17 @@ Make a multipart/form-data POST request to the `transcriptions` endpoint with th
The following example shows how to transcribe an audio file with diarization enabled. Diarization distinguishes between different speakers in the conversation. The Speech service provides information about which speaker was speaking a particular part of the transcribed speech.

-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
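For the diarization case above, the request's `definition` form field carries the diarization settings. A hedged sketch of such a payload follows; the field names and values are assumptions based on the Speech service documentation, not shown in this diff.

```shell
# Candidate definition payload for diarization.
# Field names ("diarization", "maxSpeakers", "enabled") are assumptions.
DEFINITION='{"locales":["en-US"],"diarization":{"maxSpeakers":2,"enabled":true}}'
echo "$DEFINITION"
```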
@@ -1474,17 +1474,17 @@ Make a multipart/form-data POST request to the `transcriptions` endpoint with th
The following example shows how to transcribe an audio file that has one or two channels. Multi-channel transcriptions are useful for audio files with multiple channels, such as audio files with multiple speakers or audio files with background noise. By default, the fast transcription API merges all input channels into a single channel and then performs the transcription. If this isn't desirable, channels can be transcribed independently without merging.

-- Replace `YourSubscriptionKey` with your Speech resource key.
+- Replace `YourSpeechResoureKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
-> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSubscriptionKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
+> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResoureKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
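To request independent per-channel transcription instead of the default merge, the `definition` form field lists the channels to keep separate. A hedged sketch; the `channels` field name is an assumption from the Speech service documentation, not part of this diff.

```shell
# Candidate definition payload for independent two-channel transcription.
# The "channels" field name is an assumption for illustration.
DEFINITION='{"locales":["en-US"],"channels":[0,1]}'
echo "$DEFINITION"
```

Omitting `channels` falls back to the default merged-channel behavior described above.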