
Commit 9bfa396

Merge pull request #217094 from eric-urban/eur/env-var
env vars for speech translation
2 parents fdb148c + 267fed3 commit 9bfa396

File tree: 5 files changed (+58 / -26 lines changed)
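
Every file in this change applies the same pattern: the hardcoded `YourSubscriptionKey` and `YourServiceRegion` placeholders are replaced with reads of the `SPEECH_KEY` and `SPEECH_REGION` environment variables. A minimal Python sketch of that pattern (not part of the diffs below; the missing-variable check is an added assumption):

```python
import os

# Read the Speech resource credentials from the environment instead of
# hardcoding them in source -- the pattern this commit applies to each quickstart.
speech_key = os.environ.get("SPEECH_KEY")
speech_region = os.environ.get("SPEECH_REGION")

# Added assumption: fail fast with a clear message if either variable is missing.
if not speech_key or not speech_region:
    raise RuntimeError("Set the SPEECH_KEY and SPEECH_REGION environment variables before running.")

print(f"Using Speech resource in region: {speech_region}")
```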


articles/cognitive-services/Speech-Service/includes/quickstarts/speech-translation-basics/cpp.md

Lines changed: 29 additions & 6 deletions
@@ -17,6 +17,10 @@ ms.author: eur
 ## Set up the environment
 The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-cpp) for any more requirements
 
+### Set environment variables
+
+[!INCLUDE [Environment variables](../../common/environment-variables.md)]
+
 ## Translate speech from a microphone
 
 Follow these steps to create a new console application and install the Speech SDK.
@@ -30,18 +34,22 @@ Follow these steps to create a new console application and install the Speech SDK.
 
 ```cpp
 #include <iostream>
+#include <stdlib.h>
 #include <speechapi_cxx.h>
 
 using namespace Microsoft::CognitiveServices::Speech;
 using namespace Microsoft::CognitiveServices::Speech::Audio;
 using namespace Microsoft::CognitiveServices::Speech::Translation;
 
-auto YourSubscriptionKey = "YourSubscriptionKey";
-auto YourServiceRegion = "YourServiceRegion";
+std::string GetEnvironmentVariable(const char* name);
 
 int main()
 {
-    auto speechTranslationConfig = SpeechTranslationConfig::FromSubscription(YourSubscriptionKey, YourServiceRegion);
+    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
+    auto speechKey = GetEnvironmentVariable("SPEECH_KEY");
+    auto speechRegion = GetEnvironmentVariable("SPEECH_REGION");
+
+    auto speechTranslationConfig = SpeechTranslationConfig::FromSubscription(speechKey, speechRegion);
     speechTranslationConfig->SetSpeechRecognitionLanguage("en-US");
     speechTranslationConfig->AddTargetLanguage("it");
@@ -78,11 +86,26 @@ Follow these steps to create a new console application and install the Speech SDK.
         }
     }
 }
+
+std::string GetEnvironmentVariable(const char* name)
+{
+#if defined(_MSC_VER)
+    size_t requiredSize = 0;
+    (void)getenv_s(&requiredSize, nullptr, 0, name);
+    if (requiredSize == 0)
+    {
+        return "";
+    }
+    auto buffer = std::make_unique<char[]>(requiredSize);
+    (void)getenv_s(&requiredSize, buffer.get(), requiredSize, name);
+    return buffer.get();
+#else
+    auto value = getenv(name);
+    return value ? value : "";
+#endif
+}
 ```
 
-1. In `SpeechTranslation.cpp`, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-    > [!IMPORTANT]
-    > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../../use-key-vault.md). See the Cognitive Services [security](../../../../cognitive-services-security.md) article for more information.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-to-text). Specify the full locale with a dash (`-`) separator. For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 1. To change the translation target language, replace `it` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-translation). With few exceptions you only specify the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. The default language is `en` if you don't specify a language.

articles/cognitive-services/Speech-Service/includes/quickstarts/speech-translation-basics/csharp.md

Lines changed: 8 additions & 6 deletions
@@ -17,6 +17,10 @@ ms.author: eur
 ## Set up the environment
 The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-csharp) for any more requirements.
 
+### Set environment variables
+
+[!INCLUDE [Environment variables](../../common/environment-variables.md)]
+
 ## Translate speech from a microphone
 
 Follow these steps to create a new console application and install the Speech SDK.
@@ -41,8 +45,9 @@ Follow these steps to create a new console application and install the Speech SDK.
 
 class Program
 {
-    static string YourSubscriptionKey = "YourSubscriptionKey";
-    static string YourServiceRegion = "YourServiceRegion";
+    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
+    static string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY");
+    static string speechRegion = Environment.GetEnvironmentVariable("SPEECH_REGION");
 
     static void OutputSpeechRecognitionResult(TranslationRecognitionResult translationRecognitionResult)
     {
@@ -74,7 +79,7 @@ Follow these steps to create a new console application and install the Speech SDK.
 
     async static Task Main(string[] args)
     {
-        var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(YourSubscriptionKey, YourServiceRegion);
+        var speechTranslationConfig = SpeechTranslationConfig.FromSubscription(speechKey, speechRegion);
         speechTranslationConfig.SpeechRecognitionLanguage = "en-US";
         speechTranslationConfig.AddTargetLanguage("it");
 
@@ -88,9 +93,6 @@ Follow these steps to create a new console application and install the Speech SDK.
 }
 ```
 
-1. In `Program.cs`, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-    > [!IMPORTANT]
-    > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../../use-key-vault.md). See the Cognitive Services [security](../../../../cognitive-services-security.md) article for more information.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-to-text). Specify the full locale with a dash (`-`) separator. For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 1. To change the translation target language, replace `it` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-translation). With few exceptions you only specify the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. The default language is `en` if you don't specify a language.

articles/cognitive-services/Speech-Service/includes/quickstarts/speech-translation-basics/java.md

Lines changed: 8 additions & 6 deletions
@@ -60,6 +60,10 @@ Before you can do anything, you need to install the Speech SDK. The sample in th
 mvn clean dependency:copy-dependencies
 ```
 
+### Set environment variables
+
+[!INCLUDE [Environment variables](../../common/environment-variables.md)]
+
 ## Translate speech from a microphone
 
 Follow these steps to create a new console application for speech recognition.
@@ -77,11 +81,12 @@ Follow these steps to create a new console application for speech recognition.
 import java.util.Map;
 
 public class SpeechTranslation {
-    private static String YourSubscriptionKey = "YourSubscriptionKey";
-    private static String YourServiceRegion = "YourServiceRegion";
+    // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
+    private static String speechKey = System.getenv("SPEECH_KEY");
+    private static String speechRegion = System.getenv("SPEECH_REGION");
 
     public static void main(String[] args) throws InterruptedException, ExecutionException {
-        SpeechTranslationConfig speechTranslationConfig = SpeechTranslationConfig.fromSubscription(YourSubscriptionKey, YourServiceRegion);
+        SpeechTranslationConfig speechTranslationConfig = SpeechTranslationConfig.fromSubscription(speechKey, speechRegion);
         speechTranslationConfig.setSpeechRecognitionLanguage("en-US");
 
         String[] toLanguages = { "it" };
@@ -125,9 +130,6 @@ Follow these steps to create a new console application for speech recognition.
 }
 ```
 
-1. In `SpeechTranslation.java`, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-    > [!IMPORTANT]
-    > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../../use-key-vault.md). See the Cognitive Services [security](../../../../cognitive-services-security.md) article for more information.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-to-text). Specify the full locale with a dash (`-`) separator. For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 1. To change the translation target language, replace `it` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-translation). With few exceptions you only specify the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. The default language is `en` if you don't specify a language.

articles/cognitive-services/Speech-Service/includes/quickstarts/speech-translation-basics/javascript.md

Lines changed: 7 additions & 4 deletions
@@ -18,6 +18,10 @@ ms.author: eur
 
 Before you can do anything, you need to install the Speech SDK for JavaScript. If you just want the package name to install, run `npm install microsoft-cognitiveservices-speech-sdk`. For guided installation instructions, see the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-javascript).
 
+### Set environment variables
+
+[!INCLUDE [Environment variables](../../common/environment-variables.md)]
+
 ## Translate speech from a file
 
 Follow these steps to create a Node.js console application for speech recognition.
@@ -32,7 +36,9 @@ Follow these steps to create a Node.js console application for speech recognition.
 ```javascript
 const fs = require("fs");
 const sdk = require("microsoft-cognitiveservices-speech-sdk");
-const speechTranslationConfig = sdk.SpeechTranslationConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");
+
+// This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
+const speechTranslationConfig = sdk.SpeechTranslationConfig.fromSubscription(process.env.SPEECH_KEY, process.env.SPEECH_REGION);
 speechTranslationConfig.speechRecognitionLanguage = "en-US";
 
 var language = "it";
@@ -69,9 +75,6 @@ Follow these steps to create a Node.js console application for speech recognition.
 fromFile();
 ```
 
-1. In `SpeechTranslation.js`, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-    > [!IMPORTANT]
-    > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../../use-key-vault.md). See the Cognitive Services [security](../../../../cognitive-services-security.md) article for more information.
 1. In `SpeechTranslation.js`, replace `YourAudioFile.wav` with your own WAV file. This example only recognizes speech from a WAV file. For information about other audio formats, see [How to use compressed input audio](~/articles/cognitive-services/speech-service/how-to-use-codec-compressed-audio-input-streams.md). This example supports up to 30 seconds audio.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-to-text). Specify the full locale with a dash (`-`) separator. For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 1. To change the translation target language, replace `it` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-translation). With few exceptions you only specify the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. The default language is `en` if you don't specify a language.

articles/cognitive-services/Speech-Service/includes/quickstarts/speech-translation-basics/python.md

Lines changed: 6 additions & 4 deletions
@@ -22,6 +22,10 @@ The Speech SDK for Python is available as a [Python Package Index (PyPI) module]
 
 Install a version of [Python from 3.7 to 3.10](https://www.python.org/downloads/). First check the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-python) for any more requirements
 
+### Set environment variables
+
+[!INCLUDE [Environment variables](../../common/environment-variables.md)]
+
 ## Translate speech from a microphone
 
 Follow these steps to create a new console application.
@@ -37,7 +41,8 @@ Follow these steps to create a new console application.
 import azure.cognitiveservices.speech as speechsdk
 
 def recognize_from_microphone():
-    speech_translation_config = speechsdk.translation.SpeechTranslationConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
+    # This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
+    speech_translation_config = speechsdk.translation.SpeechTranslationConfig(subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
     speech_translation_config.speech_recognition_language="en-US"
 
     target_language="it"
@@ -66,9 +71,6 @@ Follow these steps to create a new console application.
 recognize_from_microphone()
 ```
 
-1. In `speech_translation.py`, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
-    > [!IMPORTANT]
-    > Remember to remove the key from your code when you're done, and never post it publicly. For production, use a secure way of storing and accessing your credentials like [Azure Key Vault](../../../../use-key-vault.md). See the Cognitive Services [security](../../../../cognitive-services-security.md) article for more information.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-to-text). Specify the full locale with a dash (`-`) separator. For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
 1. To change the translation target language, replace `it` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md#speech-translation). With few exceptions you only specify the language code that precedes the locale dash (`-`) separator. For example, use `es` for Spanish (Spain) instead of `es-ES`. The default language is `en` if you don't specify a language.
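
The context lines of the Python hunk don't show the `import os` that `os.environ.get` relies on. A self-contained sketch of the updated configuration step, assuming the `azure-cognitiveservices-speech` package (`pip install azure-cognitiveservices-speech`) and the two environment variables from this change; the missing-variable check and the final `print` are added for illustration:

```python
import os

import azure.cognitiveservices.speech as speechsdk

# This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION".
speech_key = os.environ.get("SPEECH_KEY")
speech_region = os.environ.get("SPEECH_REGION")
if not speech_key or not speech_region:
    raise RuntimeError("Set SPEECH_KEY and SPEECH_REGION before running this sample.")

# The translation config lives in the translation submodule of the Speech SDK.
speech_translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription=speech_key, region=speech_region)
speech_translation_config.speech_recognition_language = "en-US"
speech_translation_config.add_target_language("it")

print("Speech translation config created for region:", speech_region)
```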
