Commit 7ed9e96

Use ZP instead of tabs.
1 parent 61a7e57 commit 7ed9e96

1 file changed: +41 -15 lines changed

articles/cognitive-services/Speech-Service/how-to-choose-recognition-mode.md

Lines changed: 41 additions & 15 deletions
@@ -10,6 +10,7 @@ ms.subservice: speech-service
 ms.topic: conceptual
 ms.date: 01/10/2020
 ms.author: dapine
+zone_pivot_groups: programming-languages-set-two
 ---
 
 # Choosing a speech recognition mode
@@ -22,45 +23,53 @@ If you want to process each utterance one "sentence" at a time, use the "recogni
 
 At the end of one recognized utterance, the service stops processing audio from that request. The maximum limit for recognition is a sentence duration of 20 seconds.
 
-# [C#](#tab/csharp)
+::: zone pivot="programming-language-csharp"
 
 For more information on using the `RecognizeOnceAsync` function, see the [.NET Speech SDK docs](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechrecognizer.recognizeonceasync?view=azure-dotnet#Microsoft_CognitiveServices_Speech_SpeechRecognizer_RecognizeOnceAsync).
 
 ```csharp
 var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);
 ```
 
-# [C++](#tab/cpp)
+::: zone-end
+::: zone pivot="programming-language-cpp"
 
 For more information on using the `RecognizeOnceAsync` function, see the [C++ Speech SDK docs](https://docs.microsoft.com/cpp/cognitive-services/speech/asyncrecognizer#recognizeonceasync).
 
 ```cpp
 auto result = recognize->RecognizeOnceAsync().get();
 ```
 
-# [Java](#tab/java)
+::: zone-end
+::: zone pivot="programming-language-java"
 
 For more information on using the `recognizeOnceAsync` function, see the [Java Speech SDK docs](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.SpeechRecognizer.recognizeOnceAsync?view=azure-java-stable).
 
 ```java
 SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
 ```
 
-# [Python](#tab/python)
+::: zone-end
+::: zone pivot="programming-language-python"
 
 For more information on using the `recognize_once` function, see the [Python Speech SDK docs](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechrecognizer?view=azure-python#recognize-once------azure-cognitiveservices-speech-speechrecognitionresult).
 
 ```python
 result = speech_recognizer.recognize_once()
 ```
 
-***
+::: zone-end
+::: zone pivot="programming-language-more"
+
+For additional languages, see the [Speech SDK reference docs](speech-to-text.md#speech-sdk-reference-docs).
+
+::: zone-end
 
 ## Continuous
 
 If you need long-running recognition, use the start and corresponding stop functions for continuous recognition. The start function will start and continue processing all utterances until you invoke the stop function, or until too much time in silence has passed. When using the continuous mode, be sure to register to the various events that will fire upon occurrence. For example, the "recognized" event fires when speech recognition occurs. You need to have an event handler in place to handle recognition. A limit of 10 minutes of total speech recognition time, per session is enforced by the Speech service.
 
-# [C#](#tab/csharp)
+::: zone pivot="programming-language-csharp"
 
 ```csharp
 // Subscribe to event
@@ -80,7 +89,8 @@ await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
 await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
 ```
 
-# [C++](#tab/cpp)
+::: zone-end
+::: zone pivot="programming-language-cpp"
 
 ```cpp
 // Connect to event
@@ -100,7 +110,8 @@ recognizer->StartContinuousRecognitionAsync().get();
 recognizer->StopContinuousRecognitionAsync().get();
 ```
 
-# [Java](#tab/java)
+::: zone-end
+::: zone pivot="programming-language-java"
 
 ```java
 recognizer.recognized.addEventListener((s, e) -> {
@@ -117,7 +128,8 @@ recognizer.startContinuousRecognitionAsync().get();
 recognizer.stopContinuousRecognitionAsync().get();
 ```
 
-# [Python](#tab/python)
+::: zone-end
+::: zone pivot="programming-language-python"
 
 ```python
 def recognized_cb(evt):
@@ -134,13 +146,18 @@ speech_recognizer.start_continuous_recognition()
 speech_recognizer.stop_continuous_recognition()
 ```
 
-***
+::: zone-end
+::: zone pivot="programming-language-more"
+
+For additional languages, see the [Speech SDK reference docs](speech-to-text.md#speech-sdk-reference-docs).
+
+::: zone-end
 
 ## Dictation
 
 When using continuous recognition, you can enable dictation processing by using the corresponding "enable dictation" function. This mode will cause the speech config instance to interpret word descriptions of sentence structures such as punctuation. For example, the utterance "Do you live in town question mark" would be interpreted as the text "Do you live in town?".
 
-# [C#](#tab/csharp)
+::: zone pivot="programming-language-csharp"
 
 For more information on using the `EnableDictation` function, see the [.NET Speech SDK docs](https://docs.microsoft.com/dotnet/api/microsoft.cognitiveservices.speech.speechconfig.enabledictation?view=azure-dotnet#Microsoft_CognitiveServices_Speech_SpeechConfig_EnableDictation).
 
@@ -149,7 +166,8 @@ For more information on using the `EnableDictation` function, see the [.NET Spee
 SpeechConfig.EnableDictation();
 ```
 
-# [C++](#tab/cpp)
+::: zone-end
+::: zone pivot="programming-language-cpp"
 
 For more information on using the `EnableDictation` function, see the [C++ Speech SDK docs](https://docs.microsoft.com/cpp/cognitive-services/speech/speechconfig#enabledictation).
 
@@ -158,7 +176,8 @@ For more information on using the `EnableDictation` function, see the [C++ Speec
 SpeechConfig->EnableDictation();
 ```
 
-# [Java](#tab/java)
+::: zone-end
+::: zone pivot="programming-language-java"
 
 For more information on using the `enableDictation` function, see the [Java Speech SDK docs](https://docs.microsoft.com/java/api/com.microsoft.cognitiveservices.speech.SpeechConfig.enableDictation?view=azure-java-stable).
 
@@ -167,7 +186,8 @@ For more information on using the `enableDictation` function, see the [Java Spee
 SpeechConfig.enableDictation();
 ```
 
-# [Python](#tab/python)
+::: zone-end
+::: zone pivot="programming-language-python"
 
 For more information on using the `enable_dictation` function, see the [Python Speech SDK docs](https://docs.microsoft.com/python/api/azure-cognitiveservices-speech/azure.cognitiveservices.speech.speechconfig?view=azure-python#enable-dictation--).
 
@@ -176,7 +196,13 @@ For more information on using the `enable_dictation` function, see the [Python S
 SpeechConfig.enable_dictation()
 ```
 
-***
+::: zone-end
+::: zone pivot="programming-language-more"
+
+For additional languages, see the [Speech SDK reference docs](speech-to-text.md#speech-sdk-reference-docs).
+
+::: zone-end
+
 
 ## Next steps
 
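The continuous-recognition snippets appear only partially in the hunks above, because the diff elides unchanged lines. As a rough sketch of the start/subscribe/stop pattern the article describes, a self-contained C# example might look like the following; the subscription key, region, and the 30-second wait are placeholder assumptions, not values taken from the article.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class ContinuousRecognitionSketch
{
    static async Task Main()
    {
        // Placeholder subscription key and region (assumptions for this sketch).
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");
        using var recognizer = new SpeechRecognizer(config);

        // Subscribe to the "recognized" event, which fires for each recognized utterance.
        recognizer.Recognized += (s, e) =>
        {
            if (e.Result.Reason == ResultReason.RecognizedSpeech)
            {
                Console.WriteLine($"RECOGNIZED: {e.Result.Text}");
            }
        };

        // Start continuous recognition, let it run for a while, then stop it.
        await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
        await Task.Delay(TimeSpan.FromSeconds(30));
        await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
    }
}
```

The C++, Java, and Python snippets in the other pivots follow the same shape: register a handler, start recognition, and later stop it.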

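Likewise, the dictation hunks show only the `EnableDictation` call itself. A minimal sketch of how that call might be combined with continuous recognition, under the same placeholder assumptions:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class DictationSketch
{
    static async Task Main()
    {
        // Placeholder subscription key and region (assumptions for this sketch).
        var config = SpeechConfig.FromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // Interpret spoken punctuation, e.g. "question mark" becomes "?".
        config.EnableDictation();

        using var recognizer = new SpeechRecognizer(config);
        recognizer.Recognized += (s, e) => Console.WriteLine(e.Result.Text);

        // Dictation is used together with continuous recognition.
        await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
        await Task.Delay(TimeSpan.FromSeconds(15));
        await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
    }
}
```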