Commit c85b3aa

remove standalone LID from public preview
1 parent 7005e18 commit c85b3aa

File tree

1 file changed (+8, -63 lines)


articles/cognitive-services/Speech-Service/language-identification.md

Lines changed: 8 additions & 63 deletions
```diff
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: speech-service
 ms.topic: how-to
-ms.date: 09/16/2022
+ms.date: 01/11/2023
 ms.author: eur
 zone_pivot_groups: programming-languages-speech-services-nomore-variant
 ---
```
```diff
@@ -19,15 +19,14 @@ Language identification is used to identify languages spoken in audio when compa
 
 Language identification (LID) use cases include:
 
-* [Standalone language identification](#standalone-language-identification) when you only need to identify the language in an audio source.
 * [Speech-to-text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text.
 * [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language.
 
 Note that for speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed.
 
 ## Configuration options
 
-Whether you use language identification [on its own](#standalone-language-identification), with [speech-to-text](#speech-to-text), or with [speech translation](#speech-translation), there are some common concepts and configuration options.
+Whether you use language identification with [speech-to-text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options.
 
 - Define a list of [candidate languages](#candidate-languages) that you expect in the audio.
 - Decide whether to use [at-start or continuous](#at-start-and-continuous-language-identification) language identification.
```
```diff
@@ -111,13 +110,15 @@ You can choose to prioritize accuracy or latency with language identification.
 
 > [!NOTE]
 > Latency is prioritized by default with the Speech SDK. You can choose to prioritize accuracy or latency with the Speech SDKs for C#, C++, Java ([for speech to text only](#speech-to-text)), and Python.
+
 Prioritize `Latency` if you need a low-latency result such as during live streaming. Set the priority to `Accuracy` if the audio quality may be poor, and more latency is acceptable. For example, a voicemail could have background noise, or some silence at the beginning. Allowing the engine more time will improve language identification results.
 
 * **At-start:** With at-start LID in `Latency` mode the result is returned in less than 5 seconds. With at-start LID in `Accuracy` mode the result is returned within 30 seconds. You set the priority for at-start LID with the `SpeechServiceConnection_SingleLanguageIdPriority` property.
-* **Continuous:** With continuous LID in `Latency` mode the results are returned every 2 seconds for the duration of the audio. With continuous LID in `Accuracy` mode the results are returned within no set time frame for the duration of the audio. You set the priority for continuous LID with the `SpeechServiceConnection_ContinuousLanguageIdPriority` property.
+* **Continuous:** With continuous LID in `Latency` mode the results are returned every 2 seconds for the duration of the audio. You set the priority for continuous LID with the `SpeechServiceConnection_ContinuousLanguageIdPriority` property.
 
 > [!IMPORTANT]
-> With [speech-to-text](#speech-to-text) and [speech translation](#speech-translation) continuous recognition, do not set `Accuracy` with the `SpeechServiceConnection_ContinuousLanguageIdPriority` property. The setting will be ignored without error, and the default priority of `Latency` will remain in effect. Only [standalone language identification](#standalone-language-identification) supports continuous LID with `Accuracy` prioritization.
+> With [speech-to-text](#speech-to-text) and [speech translation](#speech-translation) continuous recognition, do not set `Accuracy` with the `SpeechServiceConnection_ContinuousLanguageIdPriority` property. The setting will be ignored without error, and the default priority of `Latency` will remain in effect.
+
 Speech uses at-start LID with `Latency` prioritization by default. You need to set a priority property for any other LID configuration.
 
 ::: zone pivot="programming-language-csharp"
```
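The at-start vs. continuous priority rules described in the hunk above can be summarized as a small helper. This is a hypothetical illustration, not part of the Speech SDK: the property names and the rule that continuous speech-to-text/translation recognition ignores `Accuracy` come from the article text, while `lid_priority_setting` is an invented name.

```python
# Hedged sketch: map a LID mode to the Speech SDK property name and value.
# The property name strings are from the article; the helper is hypothetical.

AT_START_PROP = "SpeechServiceConnection_SingleLanguageIdPriority"
CONTINUOUS_PROP = "SpeechServiceConnection_ContinuousLanguageIdPriority"

def lid_priority_setting(continuous: bool, priority: str = "Latency") -> tuple:
    """Return (property_name, value) for the requested LID configuration."""
    if priority not in ("Latency", "Accuracy"):
        raise ValueError("priority must be 'Latency' or 'Accuracy'")
    if continuous and priority == "Accuracy":
        # Per the [!IMPORTANT] note: Accuracy is ignored without error for
        # continuous speech-to-text/translation; Latency remains in effect.
        priority = "Latency"
    return (CONTINUOUS_PROP if continuous else AT_START_PROP, priority)
```

The returned pair would then be applied with the SDK's set-property call, as the C# and Python snippets later in the diff show.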
```diff
@@ -246,60 +247,6 @@ recognizer.stop_continuous_recognition()
 
 ::: zone-end
 
-## Standalone language identification
-
-You use standalone language identification when you only need to identify the language in an audio source.
-
-> [!NOTE]
-> Standalone source language identification is only supported with the Speech SDKs for C#, C++, and Python.
-::: zone pivot="programming-language-csharp"
-
-See more examples of standalone language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/standalone_language_detection_samples.cs).
-
-### [Recognize once](#tab/once)
-
-:::code language="csharp" source="~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/standalone_language_detection_samples.cs" id="languageDetectionInAccuracyWithFile":::
-
-### [Continuous recognition](#tab/continuous)
-
-:::code language="csharp" source="~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/standalone_language_detection_samples.cs" id="languageDetectionContinuousWithFile":::
-
----
-
-::: zone-end
-
-::: zone pivot="programming-language-cpp"
-
-See more examples of standalone language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/standalone_language_detection_samples.cpp).
-
-### [Recognize once](#tab/once)
-
-:::code language="cpp" source="~/samples-cognitive-services-speech-sdk/samples/cpp/windows/console/samples/standalone_language_detection_samples.cpp" id="StandaloneLanguageDetectionWithMicrophone":::
-
-### [Continuous recognition](#tab/continuous)
-
-:::code language="cpp" source="~/samples-cognitive-services-speech-sdk/samples/cpp/windows/console/samples/standalone_language_detection_samples.cpp" id="StandaloneLanguageDetectionInContinuousModeWithFileInput":::
-
----
-
-::: zone-end
-
-::: zone pivot="programming-language-python"
-
-See more examples of standalone language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_language_detection_sample.py).
-
-### [Recognize once](#tab/once)
-
-:::code language="python" source="~/samples-cognitive-services-speech-sdk/samples/python/console/speech_language_detection_sample.py" id="SpeechLanguageDetectionWithFile":::
-
-### [Continuous recognition](#tab/continuous)
-
-:::code language="python" source="~/samples-cognitive-services-speech-sdk/samples/python/console/speech_language_detection_sample.py" id="SpeechContinuousLanguageDetectionWithFile":::
-
----
-
-::: zone-end
-
 ## Speech-to-text
 
 You use Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech-to-text overview](speech-to-text.md).
```
```diff
@@ -351,7 +298,6 @@ var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/
 var endpointUrl = new Uri(endpointString);
 
 var config = SpeechConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
-// can switch "Latency" to "Accuracy" depending on priority
 config.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
 
 var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
```
```diff
@@ -498,7 +444,6 @@ result.close();
 
 ---
 
-
 ::: zone-end
 
 ::: zone pivot="programming-language-python"
```
```diff
@@ -1112,8 +1057,8 @@ translation_config = speechsdk.translation.SpeechTranslationConfig(
     target_languages=('de', 'fr'))
 audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
 
-# Set the Priority (optional, default Latency, either Latency or Accuracy is accepted)
-translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_SingleLanguageIdPriority, value='Accuracy')
+# Set the Priority
+translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_SingleLanguageIdPriority, value='Latency')
 
 # Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages
 auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
```
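For context, the Python translation-with-LID configuration edited in the hunk above can be assembled end to end roughly as follows. This is a sketch, not the article's own sample: `build_lid_translation_settings` and `apply_with_speech_sdk` are hypothetical helper names, and the SDK call path assumes the `azure-cognitiveservices-speech` package plus a valid subscription key, region, and WAV file.

```python
def build_lid_translation_settings():
    # Values taken from the diff: target languages, candidate languages,
    # and the priority property now set to the default 'Latency'.
    return {
        "target_languages": ("de", "fr"),
        "candidate_languages": ["en-US", "de-DE", "zh-CN"],
        "priority_property": "SpeechServiceConnection_SingleLanguageIdPriority",
        "priority": "Latency",
    }

def apply_with_speech_sdk(settings, key, region, wav_path):
    # Hypothetical wrapper; requires: pip install azure-cognitiveservices-speech
    import azure.cognitiveservices.speech as speechsdk

    translation_config = speechsdk.translation.SpeechTranslationConfig(
        subscription=key,
        region=region,
        target_languages=settings["target_languages"])
    translation_config.set_property(
        property_id=speechsdk.PropertyId.SpeechServiceConnection_SingleLanguageIdPriority,
        value=settings["priority"])
    auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
        languages=settings["candidate_languages"])
    audio_config = speechsdk.audio.AudioConfig(filename=wav_path)
    return speechsdk.translation.TranslationRecognizer(
        translation_config=translation_config,
        audio_config=audio_config,
        auto_detect_source_language_config=auto_detect_source_language_config)
```

The recognizer returned by `apply_with_speech_sdk` would then be driven with `recognize_once()` or continuous recognition, as in the article's other Python snippets.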
