`articles/ai-services/speech-service/embedded-speech.md` (1 addition, 1 deletion)
@@ -166,7 +166,7 @@ All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, P
## Embedded speech configuration
- For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with an API key and region. For embedded speech, you don't use an AI Services resource for Speech. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you download to your local device.
+ For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with an API key and endpoint. For embedded speech, you don't use an AI Services resource for Speech. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you download to your local device.
Use the `EmbeddedSpeechConfig` object to set the location of the models or voices. If your application is used for both speech to text and text to speech, you can use the same `EmbeddedSpeechConfig` object to set the location of the models and voices.
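Not part of this diff, but as orientation for the paragraph above: a minimal C++ sketch of the `EmbeddedSpeechConfig` pattern, following the shape of the embedded speech samples. The path, model and voice names, and license strings are all placeholders, and you must download the models and voices to the device first.

```C++
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;
using namespace Microsoft::CognitiveServices::Speech::Audio;

int main()
{
    // Point the config at the on-device folder that holds the downloaded models and voices.
    auto embeddedConfig = EmbeddedSpeechConfig::FromPath("/path/to/embedded/models");

    // One config can serve both directions: select a speech to text model and a
    // text to speech voice by name, each with its license (placeholder values).
    embeddedConfig->SetSpeechRecognitionModel("Your model name", "Your model license");
    embeddedConfig->SetSpeechSynthesisVoice("Your voice name", "Your voice license");

    auto audioConfig = AudioConfig::FromDefaultMicrophoneInput();
    auto recognizer = SpeechRecognizer::FromConfig(embeddedConfig, audioConfig);
    auto synthesizer = SpeechSynthesizer::FromConfig(embeddedConfig, AudioConfig::FromDefaultSpeakerOutput());
    return 0;
}
```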
- For `SpeechRecognizer`, `SpeechSynthesizer`, `IntentRecognizer`, and `ConversationTranscriber` objects, build the authorization token from the resource ID and the Microsoft Entra access token and then use it to create a `SpeechConfig` object.
+ For `SpeechRecognizer`, `SourceLanguageRecognizer`, and `ConversationTranscriber` objects, use an appropriate instance of [TokenCredential](https://learn.microsoft.com/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](https://learn.microsoft.com/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a `SpeechConfig` object.
+ For the `TranslationRecognizer` object, use an appropriate instance of [TokenCredential](https://learn.microsoft.com/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](https://learn.microsoft.com/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a `SpeechTranslationConfig` object.
+ For `SpeechSynthesizer` and `IntentRecognizer` objects, build the authorization token from the resource ID and the Microsoft Entra access token and then use it to create a `SpeechConfig` object.
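As an illustration of the `TokenCredential` lines added above — a sketch only, assuming a recent Speech SDK version in which `SpeechConfig.FromEndpoint` accepts a `TokenCredential`; verify that overload against your SDK version. The custom-domain endpoint is a placeholder.

```C#
// Sketch: assumes SpeechConfig.FromEndpoint(Uri, TokenCredential) is available
// in your Speech SDK version. The endpoint is a placeholder custom domain.
using System;
using Azure.Identity;
using Microsoft.CognitiveServices.Speech;

var credential = new DefaultAzureCredential();
var endpoint = new Uri("https://your-custom-domain.cognitiveservices.azure.com/");
var speechConfig = SpeechConfig.FromEndpoint(endpoint, credential);
```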
::: zone pivot="programming-language-csharp"
```C#
string resourceId = "Your Resource ID";
string aadToken = "Your Microsoft Entra access token";
string region = "Your Speech Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between the resource ID and the Microsoft Entra access token.
string authorizationToken = $"aad#{resourceId}#{aadToken}";
var speechConfig = SpeechConfig.FromAuthorizationToken(authorizationToken, region);
```

::: zone-end
@@ -192,6 +220,10 @@
::: zone pivot="programming-language-cpp"

For `SpeechRecognizer`, `SpeechSynthesizer`, `IntentRecognizer`, and `ConversationTranscriber` objects, build the authorization token from the resource ID and the Microsoft Entra access token and then use it to create a `SpeechConfig` object.
```C++
std::string resourceId = "Your Resource ID";
std::string aadToken = "Your Microsoft Entra access token";
std::string region = "Your Speech Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between the resource ID and the Microsoft Entra access token.
auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
auto speechConfig = SpeechConfig::FromAuthorizationToken(authorizationToken, region);
```

::: zone-end
@@ -204,6 +236,10 @@
For the `TranslationRecognizer`, build the authorization token from the resource ID and the Microsoft Entra access token and then use it to create a `SpeechTranslationConfig` object.
::: zone pivot="programming-language-csharp"

```C#
string resourceId = "Your Resource ID";
string aadToken = "Your Microsoft Entra access token";
string region = "Your Speech Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between the resource ID and the Microsoft Entra access token.
string authorizationToken = $"aad#{resourceId}#{aadToken}";
var speechConfig = SpeechTranslationConfig.FromAuthorizationToken(authorizationToken, region);
```

::: zone-end

::: zone pivot="programming-language-cpp"

```C++
std::string resourceId = "Your Resource ID";
std::string aadToken = "Your Microsoft Entra access token";
std::string region = "Your Speech Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between the resource ID and the Microsoft Entra access token.
auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
auto speechConfig = SpeechTranslationConfig::FromAuthorizationToken(authorizationToken, region);
```
@@ -253,6 +281,10 @@
::: zone-end
::: zone pivot="programming-language-java"
### TranslationRecognizer
For the `TranslationRecognizer`, build the authorization token from the resource ID and the Microsoft Entra access token and then use it to create a `SpeechTranslationConfig` object.
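The Java block itself falls outside the lines shown in this diff; a sketch of the same token pattern under the java pivot, assuming the Java SDK's `fromAuthorizationToken` factory. The values are placeholders.

```java
import com.microsoft.cognitiveservices.speech.translation.SpeechTranslationConfig;

public class EntraTranslationConfig {
    public static void main(String[] args) {
        String resourceId = "Your Resource ID";
        String aadToken = "Your Microsoft Entra access token";
        String region = "Your Speech Region";

        // You need to include the "aad#" prefix and the "#" (hash) separator
        // between the resource ID and the Microsoft Entra access token.
        String authorizationToken = "aad#" + resourceId + "#" + aadToken;
        SpeechTranslationConfig speechConfig =
            SpeechTranslationConfig.fromAuthorizationToken(authorizationToken, region);
    }
}
```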
`articles/ai-services/speech-service/includes/how-to/recognize-speech/cpp.md` (7 additions, 7 deletions)
@@ -14,14 +14,14 @@ ms.author: eur
To call the Speech service using the Speech SDK, you need to create a [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance. This class includes information about your subscription, like your key and associated region, endpoint, host, or authorization token.
- 1. Create an AI Services resource for Speech in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesAIServices). Get the Speech resource key and region.
- 1. Create a `SpeechConfig` instance by using the following code. Replace `YourSpeechKey` and `YourSpeechRegion` with your Speech resource key and region.
+ 1. Create an AI Services resource for Speech in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesAIServices). Get the Speech resource key and endpoint.
+ 1. Create a `SpeechConfig` instance by using the following code. Replace `YourSpeechKey` and `YourSpeechEndpoint` with your Speech resource key and endpoint.
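The full snippet for this step sits outside the hunk shown here; a minimal sketch of the endpoint form, using the same `FromEndpoint` overload that appears later in this file. The values are placeholders.

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

int main()
{
    // Replace the placeholders with your Speech resource endpoint and key.
    auto speechConfig = SpeechConfig::FromEndpoint("YourSpeechEndpoint", "YourSpeechKey");
    return 0;
}
```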
<< "CANCELED: Did you set the speech resource key and region values?" << std::endl;
161
+
<< "CANCELED: Did you set the speech resource key and endpoint values?" << std::endl;
recognitionEnd.set_value(); // Notify to stop recognition.
}
@@ -205,14 +205,14 @@ For a complete code sample, see [Language identification](../../../language-iden
With [custom speech](../../../custom-speech-overview.md), you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint. The following example shows how to set a custom endpoint.
```cpp
- auto speechConfig = SpeechConfig::FromSubscription("YourSubscriptionKey", "YourServiceRegion");
+ auto speechConfig = SpeechConfig::FromEndpoint("YourServiceEndpoint", "YourSubscriptionKey");
speechConfig->SetEndpointId("YourEndpointId");
auto speechRecognizer = SpeechRecognizer::FromConfig(speechConfig);
```
## Run and use a container
- Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.
+ Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and endpoint.
For more information about containers, see Host URLs in [Install and run Speech containers with Docker](../../../speech-container-howto.md#host-urls).
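Not part of this diff, but to make the host-based initialization concrete: a sketch assuming a local speech to text container; the host URL and port are placeholders.

```cpp
#include <speechapi_cxx.h>

using namespace Microsoft::CognitiveServices::Speech;

int main()
{
    // A container host URL replaces the key and endpoint; no cloud credentials
    // are needed for a local container. Host and port are placeholders.
    auto speechConfig = SpeechConfig::FromHost("ws://localhost:5000");
    auto speechRecognizer = SpeechRecognizer::FromConfig(speechConfig);
    return 0;
}
```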