For `SpeechRecognizer`, `SpeechSynthesizer`, `IntentRecognizer`, and `ConversationTranscriber` objects, build the authorization token from the resource ID and the Microsoft Entra access token, and then use it to create a `SpeechConfig` object.

```Java
String resourceId = "Your Resource ID";
String region = "Your Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
String authorizationToken = "aad#" + resourceId + "#" + aadToken;
SpeechConfig speechConfig = SpeechConfig.fromAuthorizationToken(authorizationToken, region);
```
For `SpeechRecognizer`, `SpeechSynthesizer`, `IntentRecognizer`, and `ConversationTranscriber` objects, build the authorization token from the resource ID and the Microsoft Entra access token, and then use it to create a `SpeechConfig` object.

```Python
resourceId = "Your Resource ID"
region = "Your Region"

# You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
authorizationToken = "aad#" + resourceId + "#" + aadToken
speechConfig = speechsdk.SpeechConfig(auth_token=authorizationToken, region=region)
```
For the `TranslationRecognizer`, build the authorization token from the resource ID and the Microsoft Entra access token, and then use it to create a `SpeechTranslationConfig` object.

```cpp
std::string resourceId = "Your Resource ID";
std::string region = "Your Speech Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
auto authorizationToken = "aad#" + resourceId + "#" + aadToken;
auto speechConfig = SpeechTranslationConfig::FromAuthorizationToken(authorizationToken, region);
```
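Across all of these languages, the authorization token has the same shape: the literal prefix `aad#`, the resource ID, a `#` (hash) separator, and the Microsoft Entra access token. A minimal language-agnostic sketch of the concatenation, shown here in Python (the function name and sample values are illustrative, not part of the Speech SDK):

```Python
def build_authorization_token(resource_id: str, aad_token: str) -> str:
    # "aad#" prefix plus "#" (hash) separator between resource ID and Microsoft Entra access token.
    return "aad#" + resource_id + "#" + aad_token
```

Keeping this concatenation in one place makes it easy to update if the token format ever changes.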
::: zone-end

::: zone pivot="programming-language-java"

### SpeechRecognizer, ConversationTranscriber
For `SpeechRecognizer` and `ConversationTranscriber` objects, use an appropriate instance of [TokenCredential](/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a `SpeechConfig` object.
For the `TranslationRecognizer` object, use an appropriate instance of [TokenCredential](/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a `SpeechTranslationConfig` object.
For `SpeechSynthesizer` and `IntentRecognizer` objects, build the authorization token from the resource ID and the Microsoft Entra access token, and then use it to create a `SpeechConfig` object.

```Java
String resourceId = "Your Resource ID";
String region = "Your Region";

// You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
String authorizationToken = "aad#" + resourceId + "#" + aadToken;
SpeechConfig speechConfig = SpeechConfig.fromAuthorizationToken(authorizationToken, region);
```
For `SpeechRecognizer` and `ConversationTranscriber` objects, use an appropriate instance of [TokenCredential](/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a `SpeechConfig` object.
For the `TranslationRecognizer` object, use an appropriate instance of [TokenCredential](/dotnet/api/azure.core.tokencredential) for authentication, along with the endpoint that includes your [custom domain](/azure/ai-services/speech-service/speech-services-private-link?tabs=portal#create-a-custom-domain-name), to create a `SpeechTranslationConfig` object.

```Python
from azure.identity import InteractiveBrowserCredential

browserCredential = InteractiveBrowserCredential()

# Define the custom domain endpoint for your Speech resource.
customDomainEndpoint = "https://{your custom name}.cognitiveservices.azure.com/"
```
For `SpeechSynthesizer` and `IntentRecognizer` objects, build the authorization token from the resource ID and the Microsoft Entra access token, and then use it to create a `SpeechConfig` object.

```Python
resourceId = "Your Resource ID"
region = "Your Region"

# You need to include the "aad#" prefix and the "#" (hash) separator between resource ID and Microsoft Entra access token.
authorizationToken = "aad#" + resourceId + "#" + aadToken
speechConfig = speechsdk.SpeechConfig(auth_token=authorizationToken, region=region)
```
You need to authenticate your application to access Azure AI services. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.
To set the environment variables for your Speech resource key and endpoint, open a console window, and follow the instructions for your operating system and development environment.

- To set the `SPEECH_KEY` environment variable, replace *your-key* with one of the keys for your resource.
- To set the `ENDPOINT` environment variable, replace *your-endpoint* with one of the endpoints for your resource.

#### [Windows](#tab/windows)

```console
setx SPEECH_KEY your-key
setx ENDPOINT your-endpoint
```

> [!NOTE]
> If you only need to access the environment variables in the current console, you can set the environment variable with `set` instead of `setx`.

After you add the environment variables, you might need to restart any programs that need to read the environment variables, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before you run the example.

#### [Linux](#tab/linux)

##### Bash

Edit your *.bashrc* file, and add the environment variables:

```bash
export SPEECH_KEY=your-key
export ENDPOINT=your-endpoint
```

After you add the environment variables, run `source ~/.bashrc` from your console window to make the changes effective.

#### [macOS](#tab/macos)

##### Bash

Edit your *.bash_profile* file, and add the environment variables:

```bash
export SPEECH_KEY=your-key
export ENDPOINT=your-endpoint
```

After you add the environment variables, run `source ~/.bash_profile` from your console window to make the changes effective.

##### Xcode

For iOS and macOS development, you set the environment variables in Xcode. For example, follow these steps to set the environment variable in Xcode 13.4.1:

1. Select **Arguments** on the **Run** (Debug Run) page.
1. Under **Environment Variables** select the plus (+) sign to add a new environment variable.
1. Enter `SPEECH_KEY` for the **Name** and enter your Speech resource key for the **Value**.

To set the environment variable for your Speech resource endpoint, follow the same steps. Set `ENDPOINT` to the endpoint of your resource. For example, `https://YourServiceRegion.api.cognitive.microsoft.com`.

For more configuration options, see [the Xcode documentation](https://help.apple.com/xcode/#/dev745c5c974).
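
Once the variables are set, your code can read them at startup and fail fast when one is missing. A minimal sketch in Python (the helper name is illustrative, not part of the Speech SDK):

```Python
import os

def load_speech_settings(env=os.environ):
    """Return (key, endpoint) read from the SPEECH_KEY and ENDPOINT environment variables."""
    try:
        return env["SPEECH_KEY"], env["ENDPOINT"]
    except KeyError as missing:
        raise RuntimeError(f"Environment variable not set: {missing.args[0]}") from missing
```

Failing fast with a clear message is easier to diagnose than the authentication errors the SDK raises later when a key or endpoint is empty.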
*articles/ai-services/speech-service/includes/quickstarts/platform/java-android.md*
Create a new project in Android Studio and add the Speech SDK for Java as a library:

1. Enter *samples.speech.cognitiveservices.microsoft.com* in the **Package name** text box.
1. Select a project directory in the **Save location** selection box.
1. Select **Java** in the **Language** selection box.
1. Select **API 26: Android 8.0 (Oreo)** in the **Minimum API level** selection box.
1. Select **Finish**.

Android Studio takes some time to prepare your new project. The first time you use Android Studio, it might take a few minutes to set preferences, accept licenses, and complete the wizard.
The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) and implements .NET Standard 2.0. You install the Speech SDK later in this guide. For other requirements, see [Install the Speech SDK](../../../quickstarts/setup-platform.md?pivots=programming-language-cpp).
```cpp
            std::cout << "CANCELED: Did you set the speech resource key and endpoint values?" << std::endl;
        }
    }
}
```
1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/ai-services/speech-service/language-support.md). For example, use `es-ES` for Spanish (Spain). If you don't specify a language, the default is `en-US`. For details about how to identify one of multiple languages that might be spoken, see [Language identification](~/articles/ai-services/speech-service/language-identification.md).
1. To start speech recognition from a microphone, [build and run](/cpp/build/vscpp-step-2-build) your new console application.

   > [!IMPORTANT]
   > Make sure that you set the `SPEECH_KEY` and `ENDPOINT` [environment variables](#set-environment-variables). If you don't set these variables, the sample fails with an error message.

1. Speak into your microphone when prompted. What you speak should appear as text: