> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.

Embedded Speech is designed for on-device [speech to text](speech-to-text.md) and [text to speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in industrial equipment, a voice-enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../containers/disconnected-containers.md).

> [!IMPORTANT]
> Microsoft limits access to embedded speech. You can apply for access through the Azure AI Speech [embedded speech limited access review](https://aka.ms/csgate-embedded-speech). For more information, see [Limited access for embedded speech](/legal/cognitive-services/speech-service/embedded-speech/limited-access-embedded-speech?context=/azure/ai-services/speech-service/context/context).

For Java embedded applications, add [client-sdk-embedded](https://mvnrepository.com/artifact/com.microsoft.cognitiveservices.speech/client-sdk-embedded) as a dependency.

You can install the Speech SDK for Java using Apache Maven.
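A minimal sketch of the `pom.xml` dependency entry, assuming the `client-sdk-embedded` artifact coordinates from the link above (the version is a placeholder; pin to the latest release):

```xml
<dependency>
  <groupId>com.microsoft.cognitiveservices.speech</groupId>
  <artifactId>client-sdk-embedded</artifactId>
  <!-- Placeholder: use the latest released version -->
  <version>1.40.0</version>
</dependency>
```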
## Models and voices

For embedded speech, you need to download the speech recognition models for [speech to text](speech-to-text.md) and the voices for [text to speech](text-to-speech.md).

The following [speech to text](speech-to-text.md) models are available: da-DK, de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, it-IT, ja-JP, ko-KR, pt-BR, pt-PT, zh-CN, zh-HK, and zh-TW.
All [text to speech locales](language-support.md?tabs=tts) (except fa-IR, Persian (Iran)) are available out of the box, each with one selected female voice, one selected male voice, or both. We welcome your input to help us gauge demand for more languages and voices.
## Embedded speech configuration
For cloud-connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you download to your local device.

Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech to text and text to speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices.
::: zone pivot="programming-language-csharp"
```csharp
var paths = new List<string> { "model-folder-path", "voice-folder-path" }; // hypothetical locations of the downloaded models and voices
var embeddedSpeechConfig = EmbeddedSpeechConfig.FromPaths(paths.ToArray());

// For speech to text
embeddedSpeechConfig.SetSpeechRecognitionModel(
    "Microsoft Speech Recognizer en-US FP Model V8",
    Environment.GetEnvironmentVariable("MODEL_KEY"));

// For text to speech
embeddedSpeechConfig.SetSpeechSynthesisVoice(
    "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
    Environment.GetEnvironmentVariable("VOICE_KEY"));
```
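With the embedded configuration in place, you create and use a recognizer much as you would against the cloud service. A minimal sketch, assuming the SDK's standard `SpeechRecognizer` and `AudioConfig` types and a default microphone (this usage isn't shown in the excerpt above):

```csharp
// Recognize a single utterance from the default microphone with the local model.
using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new SpeechRecognizer(embeddedSpeechConfig, audioConfig);
var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine($"Recognized: {result.Text}");
```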
::: zone-end

::: zone pivot="programming-language-cpp"

```cpp
std::vector<std::string> paths = { "model-folder-path", "voice-folder-path" }; // hypothetical locations of the downloaded models and voices
auto embeddedSpeechConfig = EmbeddedSpeechConfig::FromPaths(paths);

// For speech to text
embeddedSpeechConfig->SetSpeechRecognitionModel(
    "Microsoft Speech Recognizer en-US FP Model V8",
    GetEnvironmentVariable("MODEL_KEY"));

// For text to speech
embeddedSpeechConfig->SetSpeechSynthesisVoice(
    "Microsoft Server Speech Text to Speech Voice (en-US, JennyNeural)",
    GetEnvironmentVariable("VOICE_KEY"));
```
::: zone-end

::: zone pivot="programming-language-java"

```java
// "paths" holds the locations of the downloaded models and voices.
var embeddedSpeechConfig = EmbeddedSpeechConfig.fromPaths(paths);
```

::: zone-end