Commit de03678

LID and code style

1 parent aa8493d commit de03678

8 files changed: +39 lines, −4 lines

articles/cognitive-services/Speech-Service/includes/how-to/recognize-speech/cpp.md

Lines changed: 6 additions & 0 deletions
@@ -193,6 +193,12 @@ speechConfig->SetSpeechRecognitionLanguage("de-DE");

[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. Refer to the [list of supported speech-to-text locales](../../../language-support.md?tabs=stt).

+## Language identification
+
+You can use [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text) with Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text.
+
+For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text).
+
## Use a custom endpoint

With [Custom Speech](../../../custom-speech-overview.md), you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. The following example shows how to set a custom endpoint.
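For orientation, here is a minimal sketch (not part of this commit) of the pattern the new C++ section points to: pass an `AutoDetectSourceLanguageConfig` together with the `SpeechConfig` when you build the recognizer, then read the detected language from the result. The key, region, and file name are placeholders; the linked article has the complete, authoritative sample.

```cpp
#include <iostream>
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;

int main()
{
    // Placeholder subscription key and region.
    auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");

    // Candidate languages the service chooses between.
    auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE" });

    auto audioConfig = AudioConfig::FromWavFileInput("YourAudioFile.wav");
    auto recognizer = SpeechRecognizer::FromConfig(speechConfig, autoDetectSourceLanguageConfig, audioConfig);

    auto result = recognizer->RecognizeOnceAsync().get();
    auto lidResult = AutoDetectSourceLanguageResult::FromResult(result);

    std::cout << "Detected language: " << lidResult->Language << std::endl;
    std::cout << "Recognized text: " << result->Text << std::endl;
}
```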

articles/cognitive-services/Speech-Service/includes/how-to/recognize-speech/csharp.md

Lines changed: 5 additions & 0 deletions
@@ -268,6 +268,11 @@ speechConfig.SpeechRecognitionLanguage = "it-IT";

The [`SpeechRecognitionLanguage`](/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.speechrecognitionlanguage) property expects a language-locale format string. Refer to the [list of supported speech-to-text locales](../../../language-support.md?tabs=stt).

+## Language identification
+
+You can use [language identification](../../../language-identification.md?pivots=programming-language-csharp#speech-to-text) with Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text.
+
+For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-csharp#speech-to-text).

## Use a custom endpoint

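The C# version of the same idea, again only a sketch rather than the committed sample: construct the `SpeechRecognizer` with an `AutoDetectSourceLanguageConfig` and read the detection from `AutoDetectSourceLanguageResult`. Key, region, and file name below are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class Program
{
    static async Task Main()
    {
        // Placeholder subscription key and region.
        var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Candidate languages the service chooses between.
        var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new[] { "en-US", "de-DE" });

        using var audioConfig = AudioConfig.FromWavFileInput("YourAudioFile.wav");
        using var recognizer = new SpeechRecognizer(speechConfig, autoDetectSourceLanguageConfig, audioConfig);

        var result = await recognizer.RecognizeOnceAsync();
        var lidResult = AutoDetectSourceLanguageResult.FromResult(result);

        Console.WriteLine($"Detected language: {lidResult.Language}");
        Console.WriteLine($"Recognized text: {result.Text}");
    }
}
```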

articles/cognitive-services/Speech-Service/includes/how-to/recognize-speech/java.md

Lines changed: 6 additions & 0 deletions
@@ -211,6 +211,12 @@ config.setSpeechRecognitionLanguage("fr-FR");

[`setSpeechRecognitionLanguage`](/java/api/com.microsoft.cognitiveservices.speech.speechconfig.setspeechrecognitionlanguage) is a parameter that takes a string as an argument. Refer to the [list of supported speech-to-text locales](../../../language-support.md?tabs=stt).

+## Language identification
+
+You can use [language identification](../../../language-identification.md?pivots=programming-language-java#speech-to-text) with Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text.
+
+For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-java#speech-to-text).
+
## Use a custom endpoint

With [Custom Speech](../../../custom-speech-overview.md), you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. The following example shows how to set a custom endpoint.
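For the Java page, a comparable sketch with placeholder key, region, and file name (see the linked article for the full sample): the recognizer takes the `AutoDetectSourceLanguageConfig` as an extra constructor argument.

```java
import java.util.Arrays;

import com.microsoft.cognitiveservices.speech.*;
import com.microsoft.cognitiveservices.speech.audio.AudioConfig;

public class LanguageIdSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder subscription key and region.
        SpeechConfig speechConfig = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Candidate languages the service chooses between.
        AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
            AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE"));

        AudioConfig audioConfig = AudioConfig.fromWavFileInput("YourAudioFile.wav");
        SpeechRecognizer recognizer = new SpeechRecognizer(speechConfig, autoDetectSourceLanguageConfig, audioConfig);

        SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
        AutoDetectSourceLanguageResult lidResult = AutoDetectSourceLanguageResult.fromResult(result);

        System.out.println("Detected language: " + lidResult.getLanguage());
        System.out.println("Recognized text: " + result.getText());

        recognizer.close();
    }
}
```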

articles/cognitive-services/Speech-Service/includes/how-to/recognize-speech/javascript.md

Lines changed: 6 additions & 0 deletions
@@ -191,6 +191,12 @@ speechConfig.speechRecognitionLanguage = "it-IT";

The [`speechRecognitionLanguage`](/javascript/api/microsoft-cognitiveservices-speech-sdk/speechconfig#speechrecognitionlanguage) property expects a language-locale format string. Refer to the [list of supported speech-to-text locales](../../../language-support.md?tabs=stt).

+## Language identification
+
+You can use [language identification](../../../language-identification.md?pivots=programming-language-javascript#speech-to-text) with Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text.
+
+For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-javascript#speech-to-text).
+
## Use a custom endpoint

With [Custom Speech](../../../custom-speech-overview.md), you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. The following example shows how to set a custom endpoint.
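For the JavaScript page, the equivalent sketch (not part of the commit; the file name and credentials are placeholders). Note that the JavaScript SDK uses the static `SpeechRecognizer.FromConfig` factory when an auto-detect config is involved.

```javascript
const fs = require("fs");
const sdk = require("microsoft-cognitiveservices-speech-sdk");

// Placeholder subscription key and region.
const speechConfig = sdk.SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

// Candidate languages the service chooses between.
const autoDetectSourceLanguageConfig = sdk.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE"]);

const audioConfig = sdk.AudioConfig.fromWavFileInput(fs.readFileSync("YourAudioFile.wav"));
const recognizer = sdk.SpeechRecognizer.FromConfig(speechConfig, autoDetectSourceLanguageConfig, audioConfig);

recognizer.recognizeOnceAsync(result => {
    const lidResult = sdk.AutoDetectSourceLanguageResult.fromResult(result);
    console.log(`Detected language: ${lidResult.language}`);
    console.log(`Recognized text: ${result.text}`);
    recognizer.close();
});
```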

articles/cognitive-services/Speech-Service/includes/how-to/recognize-speech/python.md

Lines changed: 6 additions & 0 deletions
@@ -158,6 +158,12 @@ speech_config.speech_recognition_language="de-DE"

[`speech_recognition_language`](/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig#speech-recognition-language) is a parameter that takes a string as an argument. Refer to the [list of supported speech-to-text locales](../../../language-support.md?tabs=stt).

+## Language identification
+
+You can use [language identification](../../../language-identification.md?pivots=programming-language-python#speech-to-text) with Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text.
+
+For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-python#speech-to-text).
+
## Use a custom endpoint

With [Custom Speech](../../../custom-speech-overview.md), you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. The following example shows how to set a custom endpoint.
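And the Python counterpart, again a sketch with placeholder credentials and file name rather than the committed sample; the detected language comes back through `AutoDetectSourceLanguageResult`.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder subscription key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")

# Candidate languages the service chooses between.
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE"])

audio_config = speechsdk.audio.AudioConfig(filename="YourAudioFile.wav")
speech_recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_source_language_config,
    audio_config=audio_config)

result = speech_recognizer.recognize_once()
lid_result = speechsdk.AutoDetectSourceLanguageResult(result)

print("Detected language:", lid_result.language)
print("Recognized text:", result.text)
```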

articles/cognitive-services/Speech-Service/includes/how-to/translate-speech/cpp.md

Lines changed: 2 additions & 2 deletions
@@ -299,7 +299,7 @@ For more information about speech synthesis, see [the basics of speech synthesis

## Multilingual translation with language identification

-In many scenarios, you might not know which input languages to specify. Using language identification you can detect up to 10 possible input languages and automatically translate to your target languages.
+In many scenarios, you might not know which input languages to specify. Using [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-translation) you can detect up to 10 possible input languages and automatically translate to your target languages.

The following example uses continuous translation from an audio file. When you run the sample, `en-US` and `zh-CN` will be automatically detected because they're defined in `AutoDetectSourceLanguageConfig`. Then, the speech will be translated to `de` and `fr` as specified in the calls to `AddTargetLanguage()`.

@@ -310,7 +310,7 @@ auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLangua
auto translationRecognizer = TranslationRecognizer::FromConfig(speechTranslationConfig, autoDetectSourceLanguageConfig, audioConfig);

-For the complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-translation).
+For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-translation).

[speechtranslationconfig]: /cpp/cognitive-services/speech/translation-speechtranslationconfig
[audioconfig]: /cpp/cognitive-services/speech/audio-audioconfig
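To show how the pieces referenced in this diff fit together, here is a hedged C++ sketch of multilingual translation with language identification, using continuous recognition from a WAV file. The key, region, and file name are placeholders, and the linked article remains the full sample.

```cpp
#include <iostream>
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;
using namespace Microsoft::CognitiveServices::Speech::Translation;

int main()
{
    // Placeholder subscription key and region.
    auto speechTranslationConfig = SpeechTranslationConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
    speechTranslationConfig->AddTargetLanguage("de");
    speechTranslationConfig->AddTargetLanguage("fr");

    // Candidate input languages to detect.
    auto autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "zh-CN" });

    auto audioConfig = AudioConfig::FromWavFileInput("YourAudioFile.wav");
    auto translationRecognizer = TranslationRecognizer::FromConfig(speechTranslationConfig, autoDetectSourceLanguageConfig, audioConfig);

    // Print each recognized utterance and its translations as they arrive.
    translationRecognizer->Recognized.Connect([](const TranslationRecognitionEventArgs& e)
    {
        if (e.Result->Reason == ResultReason::TranslatedSpeech)
        {
            std::cout << "Recognized: " << e.Result->Text << std::endl;
            for (const auto& pair : e.Result->Translations)
            {
                std::cout << "  [" << pair.first << "] " << pair.second << std::endl;
            }
        }
    });

    translationRecognizer->StartContinuousRecognitionAsync().get();
    std::cout << "Press Enter to stop." << std::endl;
    std::cin.get();
    translationRecognizer->StopContinuousRecognitionAsync().get();
}
```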

articles/cognitive-services/Speech-Service/includes/how-to/translate-speech/csharp.md

Lines changed: 2 additions & 2 deletions
@@ -306,7 +306,7 @@ For more information about speech synthesis, see [the basics of speech synthesis

## Multi-lingual translation with language identification

-In many scenarios, you might not know which input languages to specify. Using language identification you can detect up to 10 possible input languages and automatically translate to your target languages.
+In many scenarios, you might not know which input languages to specify. Using [language identification](../../../language-identification.md?pivots=programming-language-csharp#speech-translation) you can detect up to 10 possible input languages and automatically translate to your target languages.

The following example uses continuous translation from an audio file. When you run the sample, `en-US` and `zh-CN` will be automatically detected because they're defined in `AutoDetectSourceLanguageConfig`. Then, the speech will be translated to `de` and `fr` as specified in the calls to `AddTargetLanguage()`.

@@ -317,7 +317,7 @@ var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguage
var translationRecognizer = new TranslationRecognizer(speechTranslationConfig, autoDetectSourceLanguageConfig, audioConfig)

-For the complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-csharp#speech-translation).
+For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-csharp#speech-translation).

[speechtranslationconfig]: /dotnet/api/microsoft.cognitiveservices.speech.speechtranslationconfig
[audioconfig]: /dotnet/api/microsoft.cognitiveservices.speech.audio.audioconfig
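The C# arrangement is analogous. The sketch below is illustrative only, with placeholder credentials, and mirrors the `AutoDetectSourceLanguageConfig` plus `TranslationRecognizer` constructor call shown in the diff.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;
using Microsoft.CognitiveServices.Speech.Translation;

class Program
{
    static async Task Main()
    {
        // Placeholder subscription key and region.
        var speechTranslationConfig = SpeechTranslationConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        speechTranslationConfig.AddTargetLanguage("de");
        speechTranslationConfig.AddTargetLanguage("fr");

        // Candidate input languages to detect.
        var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new[] { "en-US", "zh-CN" });

        using var audioConfig = AudioConfig.FromWavFileInput("YourAudioFile.wav");
        using var translationRecognizer = new TranslationRecognizer(speechTranslationConfig, autoDetectSourceLanguageConfig, audioConfig);

        // Print each recognized utterance and its translations as they arrive.
        translationRecognizer.Recognized += (s, e) =>
        {
            if (e.Result.Reason == ResultReason.TranslatedSpeech)
            {
                Console.WriteLine($"Recognized: {e.Result.Text}");
                foreach (var element in e.Result.Translations)
                {
                    Console.WriteLine($"  [{element.Key}] {element.Value}");
                }
            }
        };

        await translationRecognizer.StartContinuousRecognitionAsync();
        Console.WriteLine("Press Enter to stop.");
        Console.ReadLine();
        await translationRecognizer.StopContinuousRecognitionAsync();
    }
}
```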

articles/cognitive-services/Speech-Service/includes/how-to/translate-speech/python.md

Lines changed: 6 additions & 0 deletions
@@ -291,6 +291,12 @@ translate_speech_to_text()

For more information about speech synthesis, see [the basics of speech synthesis](../../../get-started-text-to-speech.md).

+## Multi-lingual translation with language identification
+
+In many scenarios, you might not know which input languages to specify. Using [language identification](../../../language-identification.md?pivots=programming-language-python#speech-translation) you can detect up to 10 possible input languages and automatically translate to your target languages.
+
+For a complete code sample, see [language identification](../../../language-identification.md?pivots=programming-language-python#speech-translation).
+
[speechtranslationconfig]: /python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.translation.speechtranslationconfig
[audioconfig]: /python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.audio.audioconfig
[translationrecognizer]: /python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.translation.translationrecognizer
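For the Python page the diff adds no code, so here is a rough sketch of the same pattern. It assumes the Python SDK's `TranslationRecognizer` accepts an `auto_detect_source_language_config` argument, as the other languages do; credentials and the file name are placeholders, and the linked sample is the authoritative reference.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder subscription key and region.
translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="YourSubscriptionKey", region="YourServiceRegion")
translation_config.add_target_language("de")
translation_config.add_target_language("fr")

# Candidate input languages to detect.
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "zh-CN"])

audio_config = speechsdk.audio.AudioConfig(filename="YourAudioFile.wav")
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config,
    auto_detect_source_language_config=auto_detect_source_language_config,
    audio_config=audio_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    for language, translation in result.translations.items():
        print(f"  [{language}] {translation}")
```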
