For the ```TranslationRecognizer```, build the authorization token from the resource ID and the Microsoft Entra access token and then use it to create a ```SpeechTranslationConfig``` object.

```cpp
std::string resourceId = "Your Resource ID";
std::string region = "Your Speech Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
auto speechConfig = SpeechTranslationConfig::FromAuthorizationToken(authorizationToken, region);
```
::: zone-end
::: zone pivot="programming-language-java"
### SpeechRecognizer, ConversationTranscriber
For ```SpeechRecognizer```, ```ConversationTranscriber``` objects, use an appropriate instance of [TokenCredential](/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a ```SpeechConfig``` object.

For the ```TranslationRecognizer``` object, use an appropriate instance of [TokenCredential](/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a ```SpeechTranslationConfig``` object.
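
A minimal Java sketch of this approach follows. It is illustrative rather than definitive: the credential comes from the ```azure-identity``` library, the endpoint URL is a placeholder for your own custom domain, and the ```fromEndpoint``` overloads that accept a ```TokenCredential``` are assumed to be available in the Speech SDK version you use (check its release notes or reference documentation).

```Java
import java.net.URI;

import com.azure.core.credential.TokenCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.translation.SpeechTranslationConfig;

// Any TokenCredential from azure-identity works here; DefaultAzureCredential is a common choice.
TokenCredential credential = new DefaultAzureCredentialBuilder().build();

// Placeholder: the endpoint that uses your Speech resource's custom domain name.
URI endpoint = new URI("https://{your custom name}.cognitiveservices.azure.com/");

// Assumption: these fromEndpoint overloads accepting a TokenCredential exist in your SDK version.
SpeechConfig speechConfig = SpeechConfig.fromEndpoint(endpoint, credential);
SpeechTranslationConfig translationConfig = SpeechTranslationConfig.fromEndpoint(endpoint, credential);
```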
For ```SpeechSynthesizer```, ```IntentRecognizer``` objects, build the authorization token from the resource ID and the Microsoft Entra access token and then use it to create a ```SpeechConfig``` object.

```Java
String resourceId = "Your Resource ID";
String region = "Your Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
String authorizationToken = "aad#" + resourceId + "#" + aadToken;
SpeechConfig speechConfig = SpeechConfig.fromAuthorizationToken(authorizationToken, region);
```

::: zone-end

::: zone pivot="programming-language-python"

### SpeechRecognizer, ConversationTranscriber

For ```SpeechRecognizer```, ```ConversationTranscriber``` objects, use an appropriate instance of [TokenCredential](/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a ```SpeechConfig``` object.

For the ```TranslationRecognizer``` object, use an appropriate instance of [TokenCredential](/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a ```SpeechTranslationConfig``` object.

```Python
from azure.identity import InteractiveBrowserCredential

browserCredential = InteractiveBrowserCredential()

# Define the custom domain endpoint for your Speech resource
customDomainEndpoint = "https://{your custom name}.cognitiveservices.azure.com/"
```
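
Continuing the snippet above, the config object is then created from the custom domain endpoint and the credential. This is a sketch rather than a definitive call: the ```token_credential``` keyword argument is an assumption about the Speech SDK version in use, so check the ```SpeechConfig``` constructor signature in your SDK's reference documentation.

```Python
import azure.cognitiveservices.speech as speechsdk

# Assumption: your Speech SDK version accepts a token credential when constructing the config objects;
# verify the exact parameter name for the release you have installed.
speechConfig = speechsdk.SpeechConfig(endpoint=customDomainEndpoint, token_credential=browserCredential)
translationConfig = speechsdk.translation.SpeechTranslationConfig(endpoint=customDomainEndpoint, token_credential=browserCredential)
```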

For ```SpeechSynthesizer```, ```IntentRecognizer``` objects, build the authorization token from the resource ID and the Microsoft Entra access token and then use it to create a ```SpeechConfig``` object.

```Python
resourceId = "Your Resource ID"
region = "Your Region"

# You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
authorizationToken = "aad#" + resourceId + "#" + aadToken
speechConfig = speechsdk.SpeechConfig(auth_token=authorizationToken, region=region)
```