Commit 76c3cc9

Merge pull request #161 from eric-urban/eur/speech-toc
Azure OpenAI terminology
2 parents 6963d41 + a947776 commit 76c3cc9

8 files changed: +68 −70 lines changed


articles/ai-services/speech-service/includes/common/environment-variables-clu.md

Lines changed: 2 additions & 2 deletions
@@ -6,9 +6,9 @@ ms.date: 8/11/2024
 ms.author: eur
 ---

-Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you get a key for your <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a>, write it to a new environment variable on the local machine running the application.
+Your application must be authenticated to access Azure AI services resources. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.

-[!INCLUDE [Azure key vault](~/reusable-content/ce-skilling/azure/includes/ai-services/security/azure-key-vault.md)]
+[!INCLUDE [Azure key vault](~/reusable-content/ce-skilling/azure/includes/ai-services/security/microsoft-entra-id-akv.md)]

 To set the environment variables, open a console window, and follow the instructions for your operating system and development environment.

 - To set the `LANGUAGE_KEY` environment variable, replace `your-language-key` with one of the keys for your resource.

articles/ai-services/speech-service/includes/common/environment-variables-openai.md

Lines changed: 14 additions & 14 deletions
@@ -6,24 +6,24 @@ ms.date: 8/11/2024
 ms.author: eur
 ---

-Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you get a key for your Speech resource, write it to a new environment variable on the local machine running the application.
+Your application must be authenticated to access Azure AI services resources. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.

-[!INCLUDE [Azure key vault](~/reusable-content/ce-skilling/azure/includes/ai-services/security/azure-key-vault.md)]
+[!INCLUDE [Azure key vault](~/reusable-content/ce-skilling/azure/includes/ai-services/security/microsoft-entra-id-akv.md)]

 To set the environment variables, open a console window, and follow the instructions for your operating system and development environment.

-- To set the `OPEN_AI_KEY` environment variable, replace `your-openai-key` with one of the keys for your resource.
-- To set the `OPEN_AI_ENDPOINT` environment variable, replace `your-openai-endpoint` with one of the regions for your resource.
-- To set the `OPEN_AI_DEPLOYMENT_NAME` environment variable, replace `your-openai-deployment-name` with one of the regions for your resource.
+- To set the `AZURE_OPENAI_API_KEY` environment variable, replace `your-openai-key` with one of the keys for your resource.
+- To set the `AZURE_OPENAI_ENDPOINT` environment variable, replace `your-openai-endpoint` with the endpoint for your resource.
+- To set the `AZURE_OPENAI_CHAT_DEPLOYMENT` environment variable, replace `your-openai-deployment-name` with the name of your model deployment.
 - To set the `SPEECH_KEY` environment variable, replace `your-speech-key` with one of the keys for your resource.
 - To set the `SPEECH_REGION` environment variable, replace `your-speech-region` with one of the regions for your resource.

 #### [Windows](#tab/windows)

 ```console
-setx OPEN_AI_KEY your-openai-key
-setx OPEN_AI_ENDPOINT your-openai-endpoint
-setx OPEN_AI_DEPLOYMENT_NAME your-openai-deployment-name
+setx AZURE_OPENAI_API_KEY your-openai-key
+setx AZURE_OPENAI_ENDPOINT your-openai-endpoint
+setx AZURE_OPENAI_CHAT_DEPLOYMENT your-openai-deployment-name
 setx SPEECH_KEY your-speech-key
 setx SPEECH_REGION your-speech-region
 ```

@@ -36,9 +36,9 @@ After you add the environment variables, you might need to restart any running p

 #### [Linux](#tab/linux)

 ```bash
-export OPEN_AI_KEY=your-openai-key
-export OPEN_AI_ENDPOINT=your-openai-endpoint
-export OPEN_AI_DEPLOYMENT_NAME=your-openai-deployment-name
+export AZURE_OPENAI_API_KEY=your-openai-key
+export AZURE_OPENAI_ENDPOINT=your-openai-endpoint
+export AZURE_OPENAI_CHAT_DEPLOYMENT=your-openai-deployment-name
 export SPEECH_KEY=your-speech-key
 export SPEECH_REGION=your-speech-region
 ```

@@ -51,9 +51,9 @@ After you add the environment variables, run `source ~/.bashrc` from your consol

 Edit your *.bash_profile*, and add the environment variables:

 ```bash
-export OPEN_AI_KEY=your-openai-key
-export OPEN_AI_ENDPOINT=your-openai-endpoint
-export OPEN_AI_DEPLOYMENT_NAME=your-openai-deployment-name
+export AZURE_OPENAI_API_KEY=your-openai-key
+export AZURE_OPENAI_ENDPOINT=your-openai-endpoint
+export AZURE_OPENAI_CHAT_DEPLOYMENT=your-openai-deployment-name # For example, "gpt-4o-mini"
 export SPEECH_KEY=your-speech-key
 export SPEECH_REGION=your-speech-region
 ```
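After a rename like this, a quick sanity check helps catch a stale `OPEN_AI_*` export or a typo in the new names before running a sample. This is an illustrative sketch, not part of the quickstart; the `check_vars` helper and the placeholder values are assumptions for demonstration:

```shell
# Illustrative helper: report the first required variable that is unset or empty.
check_vars() {
  for name in "$@"; do
    if [ -z "$(printenv "$name")" ]; then
      echo "Missing $name" >&2
      return 1
    fi
  done
  echo "All required environment variables are set."
}

# Placeholder values, matching the exports above:
export AZURE_OPENAI_API_KEY=your-openai-key
export AZURE_OPENAI_ENDPOINT=your-openai-endpoint
export AZURE_OPENAI_CHAT_DEPLOYMENT=your-openai-deployment-name
export SPEECH_KEY=your-speech-key
export SPEECH_REGION=your-speech-region

check_vars AZURE_OPENAI_API_KEY AZURE_OPENAI_ENDPOINT AZURE_OPENAI_CHAT_DEPLOYMENT SPEECH_KEY SPEECH_REGION
```

Because the old `OPEN_AI_*` names can linger in existing shells and profiles, a check like this surfaces the problem in the console rather than as a startup exception in the app.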

articles/ai-services/speech-service/includes/common/environment-variables.md

Lines changed: 2 additions & 2 deletions
@@ -6,9 +6,9 @@ ms.date: 8/11/2024
 ms.author: eur
 ---

-You need to authenticate your application to access Azure AI services. For production, use a secure way to store and access your credentials. For example, after you get a key for your Speech resource, write it to a new environment variable on the local machine that runs the application.
+You need to authenticate your application to access Azure AI services. This article shows you how to use environment variables to store your credentials. You can then access the environment variables from your code to authenticate your application. For production, use a more secure way to store and access your credentials.

-[!INCLUDE [Azure key vault](~/reusable-content/ce-skilling/azure/includes/ai-services/security/azure-key-vault.md)]
+[!INCLUDE [Azure key vault](~/reusable-content/ce-skilling/azure/includes/ai-services/security/microsoft-entra-id-akv.md)]

 To set the environment variables for your Speech resource key and region, open a console window, and follow the instructions for your operating system and development environment.

articles/ai-services/speech-service/includes/quickstarts/openai-speech/csharp.md

Lines changed: 14 additions & 15 deletions
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 02/08/2024
+ms.date: 9/5/2024
 ms.author: eur
 ---

@@ -20,7 +20,7 @@ The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/

 ### Set environment variables

-This example requires environment variables named `OPEN_AI_KEY`, `OPEN_AI_ENDPOINT`, `OPEN_AI_DEPLOYMENT_NAME`, `SPEECH_KEY`, and `SPEECH_REGION`.
+This example requires environment variables named `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_CHAT_DEPLOYMENT`, `SPEECH_KEY`, and `SPEECH_REGION`.

 [!INCLUDE [Environment variables](../../common/environment-variables-openai.md)]

@@ -57,16 +57,16 @@ Follow these steps to create a new console application.
 using Azure;
 using Azure.AI.OpenAI;

-// This example requires environment variables named "OPEN_AI_KEY", "OPEN_AI_ENDPOINT" and "OPEN_AI_DEPLOYMENT_NAME"
+// This example requires environment variables named "AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT", and "AZURE_OPENAI_CHAT_DEPLOYMENT"
 // Your endpoint should look like the following: https://YOUR_OPEN_AI_RESOURCE_NAME.openai.azure.com/
-string openAIKey = Environment.GetEnvironmentVariable("OPEN_AI_KEY") ??
-    throw new ArgumentException("Missing OPEN_AI_KEY");
-string openAIEndpoint = Environment.GetEnvironmentVariable("OPEN_AI_ENDPOINT") ??
-    throw new ArgumentException("Missing OPEN_AI_ENDPOINT");
+string openAIKey = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY") ??
+    throw new ArgumentException("Missing AZURE_OPENAI_API_KEY");
+string openAIEndpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT") ??
+    throw new ArgumentException("Missing AZURE_OPENAI_ENDPOINT");

 // Enter the deployment name you chose when you deployed the model.
-string engine = Environment.GetEnvironmentVariable("OPEN_AI_DEPLOYMENT_NAME") ??
-    throw new ArgumentException("Missing OPEN_AI_DEPLOYMENT_NAME");
+string engine = Environment.GetEnvironmentVariable("AZURE_OPENAI_CHAT_DEPLOYMENT") ??
+    throw new ArgumentException("Missing AZURE_OPENAI_CHAT_DEPLOYMENT");

 // This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
 string speechKey = Environment.GetEnvironmentVariable("SPEECH_KEY") ??

@@ -79,15 +79,15 @@ Follow these steps to create a new console application.
 try
 {
-    await ChatWithOpenAI();
+    await ChatWithAzureOpenAI();
 }
 catch (Exception ex)
 {
     Console.WriteLine(ex);
 }

 // Prompts Azure OpenAI with a request and synthesizes the response.
-async Task AskOpenAI(string prompt)
+async Task AskAzureOpenAI(string prompt)
 {
     object consoleLock = new();
     var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);

@@ -147,7 +147,7 @@ Follow these steps to create a new console application.
 }

 // Continuously listens for speech input to recognize and send as text to Azure OpenAI
-async Task ChatWithOpenAI()
+async Task ChatWithAzureOpenAI()
 {
     // Should be the locale for the speaker's language.
     var speechConfig = SpeechConfig.FromSubscription(speechKey, speechRegion);

@@ -175,7 +175,7 @@ Follow these steps to create a new console application.
             else
             {
                 Console.WriteLine($"Recognized speech: {speechRecognitionResult.Text}");
-                await AskOpenAI(speechRecognitionResult.Text);
+                await AskAzureOpenAI(speechRecognitionResult.Text);
             }

             break;

@@ -205,7 +205,7 @@ Follow these steps to create a new console application.
 ```
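The `?? throw new ArgumentException(...)` pattern in the C# sample fails fast when a variable is missing. The same check can run in the shell before launching the app, using POSIX `${VAR:?message}` expansion. A minimal sketch with placeholder values, not part of the quickstart:

```shell
# ${VAR:?message} aborts with the message and a nonzero status when VAR is
# unset or empty, mirroring the C# fail-fast pattern.
export AZURE_OPENAI_API_KEY=your-openai-key                      # placeholder
export AZURE_OPENAI_ENDPOINT=your-openai-endpoint                # placeholder
export AZURE_OPENAI_CHAT_DEPLOYMENT=your-openai-deployment-name  # placeholder

: "${AZURE_OPENAI_API_KEY:?Missing AZURE_OPENAI_API_KEY}"
: "${AZURE_OPENAI_ENDPOINT:?Missing AZURE_OPENAI_ENDPOINT}"
: "${AZURE_OPENAI_CHAT_DEPLOYMENT:?Missing AZURE_OPENAI_CHAT_DEPLOYMENT}"
echo "Preflight passed"
```

Running such a preflight in the launch script reports the first missing variable by name, instead of leaving the app to throw after startup.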
 > [!IMPORTANT]
-> Make sure that you set the `OPEN_AI_KEY`, `OPEN_AI_ENDPOINT`, `OPEN_AI_DEPLOYMENT_NAME`, `SPEECH_KEY` and `SPEECH_REGION` [environment variables](#set-environment-variables) as described. If you don't set these variables, the sample will fail with an error message.
+> Make sure that you set the `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_CHAT_DEPLOYMENT`, `SPEECH_KEY`, and `SPEECH_REGION` [environment variables](#set-environment-variables) as described. If you don't set these variables, the sample fails with an error message.

 Speak into your microphone when prompted. The console output includes the prompt for you to begin speaking, then your request as text, and then the response from Azure OpenAI as text. The response from Azure OpenAI should be converted from text to speech and then output to the default speaker.

@@ -232,7 +232,6 @@ Here are some more considerations:
 - To change the voice that you hear, replace `en-US-JennyMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices). If the voice doesn't speak the language of the text returned from Azure OpenAI, the Speech service doesn't output synthesized audio.
 - To reduce latency for text to speech output, use the text streaming feature, which enables real-time text processing for fast audio generation and minimizes latency. For more information, see [how to use text streaming](~/articles/ai-services/speech-service/how-to-lower-speech-synthesis-latency.md#input-text-streaming).
 - To enable [TTS Avatar](~/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md) as a visual experience of speech output, see [real-time synthesis for text to speech avatar](~/articles/ai-services/speech-service/text-to-speech-avatar/real-time-synthesis-avatar.md) and the [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/js/browser/avatar#chat-sample) for a chat scenario with an avatar.
-- To use a different [model](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability), replace `gpt-35-turbo-instruct` with the ID of another [deployment](/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model). The deployment ID isn't necessarily the same as the model name. You named your deployment when you created it in [Azure OpenAI Studio](https://oai.azure.com/).
 - Azure OpenAI also performs content moderation on the prompt inputs and generated outputs. The prompts or responses might be filtered if harmful content is detected. For more information, see the [content filtering](/azure/ai-services/openai/concepts/content-filter) article.

 ## Clean up resources

articles/ai-services/speech-service/includes/quickstarts/openai-speech/intro.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 02/08/2024
+ms.date: 9/5/2024
 ms.author: eur
 ---
