## Create a speech translation configuration
To call the Speech service by using the Speech SDK, you need to create a [`SpeechTranslationConfig`][speechtranslationconfig] instance. This class includes information about your subscription, like your key and associated region, endpoint, host, or authorization token.
> [!TIP]
> Regardless of whether you're performing speech recognition, speech synthesis, translation, or intent recognition, you'll always create a configuration.
```cpp
auto SPEECH__SUBSCRIPTION__KEY = getenv("SPEECH__SUBSCRIPTION__KEY");
auto SPEECH__SERVICE__REGION = getenv("SPEECH__SERVICE__REGION");

void translateSpeech()
{
    auto speechTranslationConfig =
        SpeechTranslationConfig::FromSubscription(SPEECH__SUBSCRIPTION__KEY, SPEECH__SERVICE__REGION);

    // Translate to languages. See https://aka.ms/speech/sttt-languages
    speechTranslationConfig->AddTargetLanguage("fr");
    speechTranslationConfig->AddTargetLanguage("de");
}
```
With every call to [`AddTargetLanguage`][addlang], a new target translation language is specified. In other words, when speech is recognized from the source language, each target translation is available as part of the resulting translation operation.
## Initialize a translation recognizer
After you've created a [`SpeechTranslationConfig`][speechtranslationconfig] instance, the next step is to initialize [`TranslationRecognizer`][translationrecognizer]. When you initialize `TranslationRecognizer`, you need to pass it your `speechTranslationConfig` instance. The configuration object provides the credentials that the Speech service requires to validate your request.

If you're recognizing speech by using your device's default microphone, here's what `TranslationRecognizer` should look like:
```cpp
void translateSpeech()
{
    auto speechTranslationConfig =
        SpeechTranslationConfig::FromSubscription(SPEECH__SUBSCRIPTION__KEY, SPEECH__SERVICE__REGION);

    auto audioConfig = AudioConfig::FromDefaultMicrophoneInput();
    auto translationRecognizer = TranslationRecognizer::FromConfig(speechTranslationConfig, audioConfig);
}
```
If you want to provide an audio file instead of using a microphone, you still need to provide an `audioConfig` parameter. However, when you create an `AudioConfig` class instance, instead of calling `FromDefaultMicrophoneInput`, you call `FromWavFileInput` and pass the `filename` parameter:
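Here's an illustrative sketch of that variant, reusing the earlier configuration; `"YourAudioFile.wav"` is a placeholder path:

```cpp
void translateSpeech()
{
    auto speechTranslationConfig =
        SpeechTranslationConfig::FromSubscription(SPEECH__SUBSCRIPTION__KEY, SPEECH__SERVICE__REGION);

    // "YourAudioFile.wav" is a placeholder; substitute the path to your own file.
    auto audioConfig = AudioConfig::FromWavFileInput("YourAudioFile.wav");
    auto translationRecognizer = TranslationRecognizer::FromConfig(speechTranslationConfig, audioConfig);
}
```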
After a successful speech recognition and translation, the result contains all the translations.
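As an illustrative sketch, assuming the `translationRecognizer` from the previous snippets, a single-shot call to `RecognizeOnceAsync`, and that `<iostream>` is included, each target language keys an entry in the result's `Translations` map:

```cpp
auto result = translationRecognizer->RecognizeOnceAsync().get();
if (result->Reason == ResultReason::TranslatedSpeech)
{
    std::cout << "Recognized: " << result->Text << std::endl;

    // One entry per target language added with AddTargetLanguage, for example "fr" and "de".
    for (const auto& pair : result->Translations)
    {
        std::cout << "Translated into '" << pair.first << "': " << pair.second << std::endl;
    }
}
```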
The `TranslationRecognizer` object exposes a `Synthesizing` event. The event fires several times and provides a mechanism to retrieve the synthesized audio from the translation recognition result. If you're translating to multiple languages, see [Manual synthesis](#manual-synthesis).
Specify the synthesis voice by calling [`SetVoiceName`][setvoicename], and provide an event handler for the `Synthesizing` event to get the audio. The following example saves the translated audio as a .wav file.
> [!IMPORTANT]
> The event-based synthesis works only with a single translation. *Do not* add multiple target translation languages. Additionally, the [`SetVoiceName`][setvoicename] value should be the same language as the target translation language. For example, `"de"` could map to `"de-DE-Hedda"`.
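
Here's a sketch of how that can be wired up, reusing the `speechTranslationConfig` and `audioConfig` from the earlier snippets; the voice name follows the mapping in the note above, and the output path and append-per-chunk handling are illustrative choices:

```cpp
// A single target language, with a synthesis voice in that same language.
speechTranslationConfig->AddTargetLanguage("de");
speechTranslationConfig->SetVoiceName("de-DE-Hedda");

auto translationRecognizer = TranslationRecognizer::FromConfig(speechTranslationConfig, audioConfig);

// The Synthesizing event can fire several times; append each audio chunk as it arrives.
translationRecognizer->Synthesizing.Connect([](const TranslationSynthesisEventArgs& e)
{
    auto audio = e.Result->Audio;
    if (!audio.empty())
    {
        // "translation.wav" is an illustrative output path; requires <fstream>.
        std::ofstream file("translation.wav", std::ios::binary | std::ios::app);
        file.write(reinterpret_cast<const char*>(audio.data()), audio.size());
    }
});
```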
For more information about speech synthesis, see the basics of speech synthesis.
## Multilingual translation with language identification
In many scenarios, you might not know which input languages to specify. Using language identification, you can detect up to 10 possible input languages and automatically translate to your target languages.
The following example uses continuous translation from an audio file. When you run the sample, `en-US` and `zh-CN` will be automatically detected because they're defined in `AutoDetectSourceLanguageConfig`. Then, the speech will be translated to `de` and `fr` as specified in the calls to `AddTargetLanguage()`.
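Here's a minimal sketch of that setup, assuming the subscription variables from earlier; the audio file path is a placeholder, and stopping the session is omitted for brevity:

```cpp
// Candidate input languages for automatic detection (up to 10 can be specified).
auto autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "zh-CN" });

auto speechTranslationConfig =
    SpeechTranslationConfig::FromSubscription(SPEECH__SUBSCRIPTION__KEY, SPEECH__SERVICE__REGION);
speechTranslationConfig->AddTargetLanguage("de");
speechTranslationConfig->AddTargetLanguage("fr");

// "path/to/your/audio/file.wav" is a placeholder.
auto audioConfig = AudioConfig::FromWavFileInput("path/to/your/audio/file.wav");
auto translationRecognizer = TranslationRecognizer::FromConfig(
    speechTranslationConfig, autoDetectSourceLanguageConfig, audioConfig);

// Print each continuous-recognition result as it arrives.
translationRecognizer->Recognized.Connect([](const TranslationRecognitionEventArgs& e)
{
    if (e.Result->Reason == ResultReason::TranslatedSpeech)
    {
        for (const auto& pair : e.Result->Translations)
        {
            std::cout << "Translated into '" << pair.first << "': " << pair.second << std::endl;
        }
    }
});

translationRecognizer->StartContinuousRecognitionAsync().get();
```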