articles/cognitive-services/Speech-Service/language-identification.md
## Configuration options
> [!IMPORTANT]
> You must make a code change when upgrading to Speech SDK version 1.25 from earlier versions. With Speech SDK version 1.25, the `SpeechServiceConnection_SingleLanguageIdPriority` and `SpeechServiceConnection_ContinuousLanguageIdPriority` properties have been removed and replaced by a single property, `SpeechServiceConnection_LanguageIdMode`. The Speech service prioritizes latency over accuracy for language identification.
Whether you use language identification with [speech-to-text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options.
- Define a list of [candidate languages](#candidate-languages) that you expect in the audio.
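As a minimal sketch of these configuration options, assuming Speech SDK version 1.25 or later and C# (the locales shown are placeholders, not recommendations), the candidate languages and the language identification mode can be set up as follows:

```csharp
using Microsoft.CognitiveServices.Speech;

var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

// Optional: set the language identification mode.
// "AtStart" (the default) or "Continuous" are accepted.
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");

// Define the candidate languages you expect in the audio (placeholder locales).
var autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "fr-FR" });
```

You then pass `autoDetectSourceLanguageConfig` to the recognizer constructor along with `speechConfig` and an audio configuration.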
::: zone pivot="programming-language-csharp"
See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_with_language_id_samples.cs).
### [Recognize once](#tab/once)
```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var speechConfig = SpeechConfig.FromSubscription("YourSubscriptionKey","YourServiceRegion");

// Set the LanguageIdMode (Optional; Either Continuous or AtStart are accepted; Default AtStart)
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_LanguageIdMode, "Continuous");
```