> Language detection with custom models can only be used with real-time speech to text and speech translation. Batch transcription only supports language detection for base models.
::: zone-end
## Speech translation
You use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md).
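As an illustration, here's a minimal sketch with the Python Speech SDK (not the article's own sample): it assumes the `azure-cognitiveservices-speech` package, placeholder key and region values, and a local `sample.wav` file, and it combines a `TranslationRecognizer` with an `AutoDetectSourceLanguageConfig` so the source language is detected from English or German and the recognized speech is translated into French.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder values: substitute your own Speech resource key and region.
speech_key, service_region = "YourSubscriptionKey", "YourServiceRegion"

# Configure translation into French.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription=speech_key, region=service_region)
translation_config.add_target_language("fr")

# Candidate source languages for language identification.
auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE"])

audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config,
    audio_config=audio_config,
    auto_detect_source_language_config=auto_detect_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    print("French translation:", result.translations["fr"])
```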
When you run language ID in a container, use the `SourceLanguageRecognizer` object.
For more information about containers, see the [language identification speech containers](speech-container-lid.md#use-the-container) how-to guide.
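As a rough sketch (again with the Python Speech SDK, and assuming a language identification container listening on a hypothetical `ws://localhost:5003` host), standalone at-start language identification might look like this:

```python
import azure.cognitiveservices.speech as speechsdk

# Point the SDK at the container host instead of the cloud endpoint
# (the host URL here is an assumed example).
speech_config = speechsdk.SpeechConfig(host="ws://localhost:5003")

auto_detect_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE"])
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

# SourceLanguageRecognizer only detects the spoken language; it doesn't transcribe.
recognizer = speechsdk.SourceLanguageRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_config,
    audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    detected = result.properties.get(
        speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult)
    print("Detected language:", detected)
```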
## Speech-to-text batch transcription

To identify languages with the [Batch transcription](batch-transcription.md) REST API, use the `languageIdentification` property in the body of your [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) request.

> [!WARNING]
> Batch transcription only supports language identification for base models. If both language identification and a custom model are specified in the transcription request, the service falls back to the base models for the specified candidate languages. This might result in unexpected recognition results.
>
> If your speech to text scenario requires both language identification and custom models, use [real-time speech to text](#speech-to-text-custom-models) instead of batch transcription.

The following example shows the usage of the `languageIdentification` property with four candidate languages. For more information about request properties, see [Create a batch transcription](batch-transcription-create.md#request-configuration-options).

```json
{
    <...>
    "properties": {
        <...>
        "languageIdentification": {
            "candidateLocales": [
                "en-US",
                "ja-JP",
                "zh-CN",
                "hi-IN"
            ]
        },
        <...>
    }
}
```
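To make the call concrete, here's a rough sketch of submitting such a request with Python's `requests` library. It isn't the article's own sample: it assumes version 3.1 of the Speech to text REST API, a placeholder key and region, and a hypothetical audio URL.

```python
import requests

# Assumed placeholders: your Speech resource key and region, plus an audio URL
# that the service can reach (for example, a blob URL with a SAS token).
subscription_key = "YourSubscriptionKey"
region = "eastus"

body = {
    "displayName": "Language identification batch transcription",
    "locale": "en-US",  # a locale is still required alongside languageIdentification
    "contentUrls": ["https://example.com/audio/sample.wav"],
    "properties": {
        "languageIdentification": {
            "candidateLocales": ["en-US", "ja-JP", "zh-CN", "hi-IN"]
        }
    },
}

response = requests.post(
    f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions",
    headers={
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    },
    json=body,
)
response.raise_for_status()

# The response describes the new transcription; its "self" URL can be polled
# until the transcription status reaches a terminal state.
print("Created transcription:", response.json()["self"])
```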
## Next steps
* [Try the speech to text quickstart](get-started-speech-to-text.md)