
Commit 6bd2246

Opening a new PR to fix merge issues in Susan's PR.
1 parent 5ba4b12 commit 6bd2246

3 files changed: 72 additions and 18 deletions


articles/cognitive-services/Speech-Service/how-to-automatic-language-detection.md

Lines changed: 35 additions & 4 deletions
@@ -8,9 +8,9 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: speech-service
 ms.topic: conceptual
-ms.date: 03/16/2020
+ms.date: 05/19/2020
 ms.author: trbye
-zone_pivot_groups: programming-languages-set-two
+zone_pivot_groups: programming-languages-set-nineteen
 ---
 
 # Automatic language detection for speech to text
@@ -20,7 +20,7 @@ Automatic language detection is used to determine the most likely match for audio
 In this article, you'll learn how to use `AutoDetectSourceLanguageConfig` to construct a `SpeechRecognizer` object and retrieve the detected language.
 
 > [!IMPORTANT]
-> This feature is only available for the Speech SDK for C#, C++, Java and Python.
+> This feature is only available for the Speech SDK with C#, C++, Java, Python, and Objective-C.
 
 ## Automatic language detection with the Speech SDK
 
@@ -113,6 +113,23 @@ detected_language = auto_detect_source_language_result.language
 
 ::: zone-end
 
+::: zone pivot="programming-language-objectivec"
+
+```Objective-C
+NSArray *languages = @[@"zh-CN", @"de-DE"];
+SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
+        [[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
+SPXSpeechRecognizer* speechRecognizer = \
+        [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
+                           autoDetectSourceLanguageConfiguration:autoDetectSourceLanguageConfig
+                                              audioConfiguration:audioConfig];
+SPXSpeechRecognitionResult *result = [speechRecognizer recognizeOnce];
+SPXAutoDetectSourceLanguageResult *languageDetectionResult = [[SPXAutoDetectSourceLanguageResult alloc] init:result];
+NSString *detectedLanguage = [languageDetectionResult language];
+```
+
+::: zone-end
+
 ## Use a custom model for automatic language detection
 
 In addition to language detection using Speech service models, you can specify a custom model for enhanced recognition. If a custom model isn't provided, the service will use the default language model.
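
For reference, the automatic language detection flow that the hunk above adds in Objective-C looks roughly like the following in Python. This is a minimal sketch, assuming the azure-cognitiveservices-speech package; the subscription key, region, and audio file name are placeholders.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials and audio input -- replace with your own values.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(filename="your-audio-file.wav")

# Limit automatic detection to the candidate languages you expect.
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["zh-CN", "de-DE"])

speech_recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect_source_language_config,
    audio_config=audio_config)

# Recognize once and read back which language the service detected.
result = speech_recognizer.recognize_once()
auto_detect_source_language_result = speechsdk.AutoDetectSourceLanguageResult(result)
detected_language = auto_detect_source_language_result.language
print(detected_language)
```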
@@ -177,6 +194,20 @@ AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
 
 ::: zone-end
 
+::: zone pivot="programming-language-objectivec"
+
+```Objective-C
+SPXSourceLanguageConfiguration* enLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"en-US"];
+SPXSourceLanguageConfiguration* frLanguageConfig = \
+        [[SPXSourceLanguageConfiguration alloc]initWithLanguage:@"fr-FR"
+                                                     endpointId:@"The Endpoint Id for custom model of fr-FR"];
+NSArray *languageConfigs = @[enLanguageConfig, frLanguageConfig];
+SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
+        [[SPXAutoDetectSourceLanguageConfiguration alloc]initWithSourceLanguageConfigurations:languageConfigs];
+```
+
+::: zone-end
+
 ## Next steps
 
-- [Speech SDK reference documentation](speech-sdk.md)
+- [Speech SDK reference documentation](speech-sdk.md)
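
For comparison with the custom-model configuration the hunk above adds in Objective-C, a minimal Python sketch follows. It assumes the azure-cognitiveservices-speech package; the endpoint ID string is a placeholder, and the `sourceLanguageConfigs` keyword reflects the Python SDK naming as best I can tell.

```python
import azure.cognitiveservices.speech as speechsdk

# Default language model for en-US, custom model (identified by endpoint ID) for fr-FR.
en_language_config = speechsdk.languageconfig.SourceLanguageConfig("en-US")
fr_language_config = speechsdk.languageconfig.SourceLanguageConfig(
    "fr-FR", "The Endpoint Id for custom model of fr-FR")

# Bundle the per-language configurations into the auto-detect configuration.
auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    sourceLanguageConfigs=[en_language_config, fr_language_config])
```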

articles/cognitive-services/Speech-Service/how-to-specify-source-language.md

Lines changed: 23 additions & 14 deletions
@@ -8,9 +8,9 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: speech-service
 ms.topic: conceptual
-ms.date: 01/07/2020
+ms.date: 05/19/2020
 ms.author: qiohu
-zone_pivot_groups: programming-languages-speech-services-nomore-variant
+zone_pivot_groups: programming-languages-set-two
 ---
 
 # Specify source language for speech to text
@@ -137,9 +137,9 @@ speech_recognizer = speechsdk.SpeechRecognizer(
 
 ::: zone-end
 
-::: zone pivot="programming-language-javascript"
+::: zone pivot="programming-language-more"
 
-## How to specify source language in JavaScript
+## How to specify source language in Javascript
 
 The first step is to create a `SpeechConfig`:
 
@@ -158,30 +158,39 @@ If you're using a custom model for recognition, you can specify the endpoint with
 ```Javascript
 speechConfig.endpointId = "The Endpoint ID for your custom model.";
 ```
-::: zone-end
-
-::: zone pivot="programming-language-objectivec"
 
 ## How to specify source language in Objective-C
 
-The first step is to create a `speechConfig`:
+In this example, the source language is provided explicitly as a parameter to the `SPXSpeechRecognizer` constructor.
 
 ```Objective-C
-SPXSpeechConfiguration *speechConfig = [[SPXSpeechConfiguration alloc] initWithSubscription:@"YourSubscriptionkey" region:@"YourRegion"];
+SPXSpeechRecognizer* speechRecognizer = \
+        [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig language:@"de-DE" audioConfiguration:audioConfig];
 ```
 
-Next, specify the source language of your audio with `speechRecognitionLanguage`:
+In this example, the source language is provided using `SPXSourceLanguageConfiguration`, which is then passed as a parameter to the `SPXSpeechRecognizer` constructor.
 
 ```Objective-C
-speechConfig.speechRecognitionLanguage = @"de-DE";
+SPXSourceLanguageConfiguration* sourceLanguageConfig = [[SPXSourceLanguageConfiguration alloc]init:@"de-DE"];
+SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
+                                                                      sourceLanguageConfiguration:sourceLanguageConfig
+                                                                               audioConfiguration:audioConfig];
 ```
 
-If you're using a custom model for recognition, you can specify the endpoint with `endpointId`:
+In this example, the source language and the custom endpoint are provided using `SPXSourceLanguageConfiguration`, which is then passed as a parameter to the `SPXSpeechRecognizer` constructor.
 
 ```Objective-C
-speechConfig.endpointId = @"The Endpoint ID for your custom model.";
+SPXSourceLanguageConfiguration* sourceLanguageConfig = \
+        [[SPXSourceLanguageConfiguration alloc]initWithLanguage:@"de-DE"
+                                                     endpointId:@"The Endpoint ID for your custom model."];
+SPXSpeechRecognizer* speechRecognizer = [[SPXSpeechRecognizer alloc] initWithSpeechConfiguration:speechConfig
+                                                                      sourceLanguageConfiguration:sourceLanguageConfig
+                                                                               audioConfiguration:audioConfig];
 ```
 
+> [!NOTE]
+> The `speechRecognitionLanguage` and `endpointId` properties are deprecated on the `SPXSpeechConfiguration` class in Objective-C. Their use is discouraged, and they shouldn't be used when constructing an `SPXSpeechRecognizer`.
+
 ::: zone-end
 
 ## See also
@@ -190,4 +199,4 @@ speechConfig.endpointId = @"The Endpoint ID for your custom model.";
 
 ## Next steps
 
-* [Speech SDK reference documentation](speech-sdk.md)
+* [Speech SDK reference documentation](speech-sdk.md)

articles/zone-pivot-groups.yml

Lines changed: 14 additions & 0 deletions
@@ -417,3 +417,17 @@ groups:
     title: JavaScript
   - id: programming-language-objectivec
     title: Objective-C
+- id: programming-languages-set-nineteen
+  title: Programming languages
+  prompt: Choose a programming language
+  pivots:
+  - id: programming-language-csharp
+    title: C#
+  - id: programming-language-cpp
+    title: C++
+  - id: programming-language-java
+    title: Java
+  - id: programming-language-python
+    title: Python
+  - id: programming-language-objectivec
+    title: Objective-C
