::: zone pivot="rest-api"

You can make a [Models_ListBaseModels](/rest/api/speechtotext/models/list-base-models) request to get available base models for all locales.

Make an HTTP GET request as shown in the following example for the `eastus` region. Replace `YourSpeechResourceKey` with your Speech resource key. Replace `eastus` if you're using a different region.

```azurecli-interactive
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base" -H "Ocp-Apim-Subscription-Key: YourSpeechResourceKey"
```

By default, only the 100 oldest base models are returned. Use the `skip` and `top` query parameters to page through the results. For example, the following request returns the next 100 base models after the first 100.

```azurecli-interactive
curl -v -X GET "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base?skip=100&top=100" -H "Ocp-Apim-Subscription-Key: YourSpeechResourceKey"
```
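The same `skip`/`top` paging can be scripted. The following Python sketch only builds the paged request with the standard library; the region, path, and header come from the curl examples above, and the key value is a placeholder, not a real credential:

```python
from urllib.parse import urlencode
from urllib.request import Request

def base_models_page(region, skip, top=100):
    """Build one paged Models_ListBaseModels request (not yet sent)."""
    query = urlencode({"skip": skip, "top": top})
    url = (f"https://{region}.api.cognitive.microsoft.com"
           f"/speechtotext/v3.2/models/base?{query}")
    # Placeholder key for illustration; use your real Speech resource key.
    return Request(url, headers={"Ocp-Apim-Subscription-Key": "YourSpeechResourceKey"})

first_page = base_models_page("eastus", skip=0)     # the 100 oldest models
second_page = base_models_page("eastus", skip=100)  # the next 100
```

Send each request with `urllib.request.urlopen` and stop paging when a response returns fewer than `top` models.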
::: zone-end
::: zone pivot="rest-api"

You set the full model URI as shown in this example for the `eastus` region. Replace `YourSpeechResourceKey` with your Speech resource key. Replace `eastus` if you're using a different region.
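Because the `displayName` property of a Whisper base model contains "Whisper", one way to find the model URI is to scan the base models list. This is a hedged sketch: the `displayName` and `self` field names are assumptions based on the Speech to text v3.2 REST API response shape, and the model IDs shown are made up:

```python
def pick_whisper_model_uri(base_models):
    """Return the self URI of the first base model whose displayName
    mentions Whisper, or None if no Whisper model is available."""
    for model in base_models:
        if "Whisper" in model.get("displayName", ""):
            return model.get("self")
    return None

# Illustrative entries shaped like the "values" array of a
# Models_ListBaseModels response (IDs are invented):
models = [
    {"displayName": "20240220 Standard",
     "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/111"},
    {"displayName": "20240228 Whisper Large V2",
     "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/222"},
]
whisper_uri = pick_whisper_model_uri(models)
```

The returned URI is what you would set as the `model` property of the transcription request.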
**articles/ai-services/speech-service/batch-transcription-get.md**
> [!IMPORTANT]
> Batch transcription jobs are scheduled on a best-effort basis. At peak hours, it can take 30 minutes or longer for a transcription job to start processing. For most of the execution time, the transcription status is `Running`, because the job is assigned the `Running` status the moment it moves to the batch transcription backend system. When the base model is used, this assignment happens almost immediately; it's slightly slower for custom models. Thus, the amount of time a job spends in the `Running` state doesn't correspond to the actual transcription time, because it also includes waiting time in internal queues.

Make an HTTP GET request using the URI as shown in the following example. Replace `YourTranscriptionId` with your transcription ID, replace `YourSpeechResourceKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.

```azurecli-interactive
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/YourTranscriptionId" -H "Ocp-Apim-Subscription-Key: YourSpeechResourceKey"
```
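Because the job can sit in `NotStarted` or `Running` for a while, the status check lends itself to a small polling loop. This is a hedged Python sketch, not an official sample: the terminal status values `Succeeded` and `Failed` (and the initial `NotStarted`) are assumptions about the batch transcription API, while `Running` is described in the note above:

```python
import json
import time
from urllib.request import Request, urlopen

def status_request(region, transcription_id, key):
    """Build one Transcriptions_Get status request."""
    url = (f"https://{region}.api.cognitive.microsoft.com"
           f"/speechtotext/v3.2/transcriptions/{transcription_id}")
    return Request(url, headers={"Ocp-Apim-Subscription-Key": key})

def wait_for_transcription(region, transcription_id, key, poll_seconds=30):
    """Poll until the job leaves the NotStarted/Running states,
    then return the final status string."""
    while True:
        with urlopen(status_request(region, transcription_id, key)) as resp:
            status = json.load(resp)["status"]
        if status not in ("NotStarted", "Running"):
            return status  # assumed to be "Succeeded" or "Failed"
        time.sleep(poll_seconds)
```

Keep the polling interval generous; as noted above, queue time is included in the `Running` state, so tight polling gains nothing.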
You should receive a response body with the transcription status and related details.
The [Transcriptions_ListFiles](/rest/api/speechtotext/transcriptions/list-files) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.

Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSpeechResourceKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.

```azurecli-interactive
curl -v -X GET "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/YourTranscriptionId/files" -H "Ocp-Apim-Subscription-Key: YourSpeechResourceKey"
```
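Once the files list is retrieved, the transcription report can be separated from the per-audio results. In this hedged sketch, the `values`, `kind`, and `links.contentUrl` field names, and the `Transcription`/`TranscriptionReport` kind values, are assumptions about the v3.2 API response rather than facts taken from this excerpt:

```python
def split_result_files(files_response):
    """Separate per-audio transcription results from the transcription
    report in a Transcriptions_ListFiles-style response."""
    reports, results = [], []
    for entry in files_response.get("values", []):
        url = entry.get("links", {}).get("contentUrl")
        if entry.get("kind") == "TranscriptionReport":
            reports.append(url)
        elif entry.get("kind") == "Transcription":
            results.append(url)
    return reports, results

# Illustrative response shape (URLs are placeholders):
sample = {
    "values": [
        {"kind": "TranscriptionReport",
         "links": {"contentUrl": "https://example.com/report.json"}},
        {"kind": "Transcription",
         "links": {"contentUrl": "https://example.com/audio1.json"}},
    ]
}
reports, results = split_result_files(sample)
```

Each content URL can then be downloaded directly; one transcription file is produced per successfully transcribed audio file, plus one report per job.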
You should receive a response body that lists the result files.
**articles/ai-services/speech-service/fast-transcription-create.md**
The following example shows how to transcribe an audio file with a specified locale. If you know the locale of the audio file, you can specify it to improve transcription accuracy and minimize latency.

- Replace `YourSpeechResourceKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResourceKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
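If you prefer not to shell out to curl, the multipart/form-data request can be assembled in Python. This sketch only builds the request body; the `audio` and `definition` form-field names and the `{"locales": [...]}` definition shape are assumptions about the fast transcription API, not details taken from this excerpt:

```python
import io
import uuid

def build_multipart(audio_bytes, definition_json, filename="audio.wav"):
    """Assemble a multipart/form-data body with an "audio" file part and
    a "definition" JSON part (field names assumed, see lead-in)."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()

    def add_part(disposition, payload):
        # Each part: boundary line, headers, blank line, payload, CRLF.
        body.write(f"--{boundary}\r\n{disposition}\r\n\r\n".encode())
        body.write(payload)
        body.write(b"\r\n")

    add_part(f'Content-Disposition: form-data; name="audio"; filename="{filename}"',
             audio_bytes)
    add_part('Content-Disposition: form-data; name="definition"',
             definition_json.encode())
    body.write(f"--{boundary}--\r\n".encode())
    return f"multipart/form-data; boundary={boundary}", body.getvalue()

content_type, payload = build_multipart(b"\x00\x01", '{"locales": ["en-US"]}')
```

POST `payload` to the `transcriptions` endpoint with `Content-Type` set to the returned `content_type` and the `Ocp-Apim-Subscription-Key` (or `Authorization: Bearer`) header, mirroring the curl example.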
> [!NOTE]
> The language identification in fast transcription is designed to identify one main language locale per audio file. If you need to transcribe multilingual content in the audio, consider [multilingual transcription (preview)](?tabs=multilingual-transcription-on).

- Replace `YourSpeechResourceKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResourceKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
The following example shows how to transcribe an audio file with the latest multilingual speech transcription model. If your audio contains multilingual content that you want transcribed continuously and accurately, you can use this model without specifying locale codes.

- Replace `YourSpeechResourceKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResourceKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
The following example shows how to transcribe an audio file with diarization enabled. Diarization distinguishes between different speakers in the conversation. The Speech service indicates which speaker spoke each part of the transcribed speech.

- Replace `YourSpeechResourceKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResourceKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.
The following example shows how to transcribe an audio file that has one or two channels. Multi-channel transcriptions are useful for audio with multiple channels, such as recordings with multiple speakers or with background noise. By default, the fast transcription API merges all input channels into a single channel and then performs the transcription. If this isn't desirable, channels can be transcribed independently without merging.

- Replace `YourSpeechResourceKey` with your Speech resource key.
- Replace `YourServiceRegion` with your Speech resource region.
- Replace `YourAudioFile` with the path to your audio file.

> [!IMPORTANT]
> For the recommended keyless authentication with Microsoft Entra ID, replace `--header 'Ocp-Apim-Subscription-Key: YourSpeechResourceKey'` with `--header "Authorization: Bearer YourAccessToken"`. For more information about keyless authentication, see the [role-based access control](./role-based-access-control.md#authentication-with-keys-and-tokens) how-to guide.