**articles/ai-services/speech-service/includes/common/environment-variables-openai.md** (13 additions, 13 deletions)
@@ -2,16 +2,17 @@

---
author: eric-urban
ms.service: azure-ai-speech
ms.topic: include
ms.date: 02/08/2024
ms.author: eur
---

Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you [get a key](~/articles/ai-services/multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource) for your Speech resource, write it to a new environment variable on the local machine running the application.

> [!TIP]
> Don't include the key directly in your code, and never post it publicly. See [Azure AI services security](../../../security-features.md) for more authentication options like [Azure Key Vault](../../../use-key-vault.md).
To set the environment variables, open a console window, and follow the instructions for your operating system and development environment.

- To set the `OPEN_AI_KEY` environment variable, replace `your-openai-key` with one of the keys for your resource.
- To set the `OPEN_AI_ENDPOINT` environment variable, replace `your-openai-endpoint` with the endpoint for your resource.
- To set the `OPEN_AI_DEPLOYMENT_NAME` environment variable, replace `your-openai-deployment-name` with the name of your deployment.
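Once set, the variables can be read at run time. Here's a minimal Python sketch of a fail-fast lookup; the helper name `require_env` is illustrative, not part of any SDK:

```python
import os

def require_env(name: str) -> str:
    """Return the value of a required environment variable, or fail fast
    with a message naming the variable that's missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set the {name} environment variable before running the sample.")
    return value

# Example usage (after the variables above are set):
#   openai_key = require_env("OPEN_AI_KEY")
#   openai_endpoint = require_env("OPEN_AI_ENDPOINT")
#   deployment_name = require_env("OPEN_AI_DEPLOYMENT_NAME")
```

Failing fast like this surfaces a missing variable immediately instead of with a confusing authentication error later.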
@@ -23,15 +24,15 @@

> If you only need to access the environment variable in the current running console, set the environment variable with `set` instead of `setx`.

After you add the environment variables, you might need to restart any running programs that need to read the environment variable, including the console window. For example, if Visual Studio is your editor, restart Visual Studio before running the example.

After you add the environment variables, run `source ~/.bashrc` from your console window to make the changes effective.

#### [macOS](#tab/macos)

##### Bash

Edit your *.bash_profile*, and add the environment variables:

```bash
export OPEN_AI_KEY=your-openai-key
```

@@ -63,11 +63,11 @@

After you add the environment variables, run `source ~/.bash_profile` from your console window to make the changes effective.

##### Xcode

For iOS and macOS development, set the environment variables in Xcode. For example, follow these steps to set the environment variable in Xcode 13.4.1.
The Speech SDK is available as a [NuGet package](https://www.nuget.org/packages/Microsoft.CognitiveServices.Speech) and implements .NET Standard 2.0. You install the Speech SDK later in this guide, but first check the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-csharp) for any more requirements.

### Set environment variables
@@ -27,19 +28,27 @@

This example requires environment variables named `OPEN_AI_KEY`, `OPEN_AI_ENDPOINT`, `OPEN_AI_DEPLOYMENT_NAME`, `SPEECH_KEY`, and `SPEECH_REGION`.

Follow these steps to create a new console application.

1. Open a command prompt window in the folder where you want the new project. Run this command to create a console application with the .NET CLI.

    ```dotnetcli
    dotnet new console
    ```

    The command creates a *Program.cs* file in the project directory.

1. Install the Speech SDK in your new project with the .NET CLI.

1. Install the Azure OpenAI SDK (prerelease) in your new project with the .NET CLI.

    ```dotnetcli
    dotnet add package Azure.AI.OpenAI --prerelease
    ```

1. Replace the contents of `Program.cs` with the following code.

    ```csharp
    using System.Text;
    ```
@@ -189,14 +198,14 @@

1. To increase or decrease the number of tokens returned by Azure OpenAI, change the `MaxTokens` property in the `ChatCompletionsOptions` class instance. For more information about tokens and cost implications, see [Azure OpenAI tokens](/azure/ai-services/openai/overview#tokens) and [Azure OpenAI pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).

1. Run your new console application to start speech recognition from a microphone:

    ```console
    dotnet run
    ```

> [!IMPORTANT]
> Make sure that you set the `OPEN_AI_KEY`, `OPEN_AI_ENDPOINT`, `OPEN_AI_DEPLOYMENT_NAME`, `SPEECH_KEY`, and `SPEECH_REGION` environment variables as described [previously](#set-environment-variables). If you don't set these variables, the sample will fail with an error message.

Speak into your microphone when prompted. The console output includes the prompt for you to begin speaking, then your request as text, and then the response from Azure OpenAI as text. The response from Azure OpenAI should be converted from text to speech and then output to the default speaker.
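For the `MaxTokens` setting above, a common rule of thumb is that one token corresponds to roughly four characters of English text. The sketch below uses that heuristic; the function name and the four-characters-per-token ratio are illustrative assumptions, not an exact tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per English token.
    Heuristic only; a real tokenizer gives exact counts."""
    return max(1, len(text) // 4)

# This only helps ballpark whether a prompt or response fits a
# MaxTokens budget before you call the service.
```

An exact count requires the model's own tokenizer; the heuristic is just a quick sanity check when budgeting requests.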
@@ -216,12 +225,13 @@

```console
PS C:\dev\openai\csharp>
```

## Remarks

Here are some more considerations:

- To change the speech recognition language, replace `en-US` with another [supported language](~/articles/ai-services/speech-service/language-support.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US`. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/ai-services/speech-service/language-identification.md).
- To change the voice that you hear, replace `en-US-JennyMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices). If the voice doesn't speak the language of the text returned from Azure OpenAI, the Speech service doesn't output synthesized audio.
- To use a different [model](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability), replace `gpt-35-turbo-instruct` with the ID of another [deployment](/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model). The deployment ID isn't necessarily the same as the model name. You named your deployment when you created it in [Azure OpenAI Studio](https://oai.azure.com/).
- Azure OpenAI also performs content moderation on the prompt inputs and generated outputs. The prompts or responses might be filtered if harmful content is detected. For more information, see the [content filtering](/azure/ai-services/openai/concepts/content-filter) article.
**articles/ai-services/speech-service/includes/quickstarts/openai-speech/intro.md** (7 additions, 6 deletions)
@@ -2,18 +2,19 @@

---
author: eric-urban
ms.service: azure-ai-speech
ms.topic: include
ms.date: 02/08/2024
ms.author: eur
---

In this how-to guide, you can use Azure AI Speech to converse with Azure OpenAI Service. The text recognized by the Speech service is sent to Azure OpenAI. The Speech service synthesizes speech from the text response from Azure OpenAI.

Speak into the microphone to start a conversation with Azure OpenAI.

- The Speech service recognizes your speech and converts it into text (speech to text).
- Your request as text is sent to Azure OpenAI.
- The Speech service text to speech feature synthesizes the response from Azure OpenAI to the default speaker.

Although the experience of this example is a back-and-forth exchange, Azure OpenAI doesn't remember the context of your conversation.

> [!IMPORTANT]
> To complete the steps in this guide, you must have access to Microsoft Azure OpenAI Service in your Azure subscription. Currently, access to this service is granted only by application. Apply for access to Azure OpenAI by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access).
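The flow described above (speech to text, then a chat completion, then text to speech) can be sketched as plain control flow. Everything in this sketch is hypothetical scaffolding: the three callables stand in for the Speech SDK and Azure OpenAI calls that the quickstart code implements.

```python
from typing import Callable

def converse_once(
    recognize_speech: Callable[[], str],   # speech to text (Speech SDK in the quickstart)
    ask_openai: Callable[[str], str],      # chat completion (Azure OpenAI in the quickstart)
    speak: Callable[[str], None],          # text to speech to the default speaker
) -> str:
    """One turn of the conversation. No history is kept between turns,
    matching the quickstart: Azure OpenAI doesn't remember earlier turns."""
    request_text = recognize_speech()
    response_text = ask_openai(request_text)
    speak(response_text)
    return response_text
```

Because each turn is independent, carrying conversational context would require the caller to accumulate and resend prior messages, which this example deliberately doesn't do.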
**articles/ai-services/speech-service/includes/quickstarts/openai-speech/python.md** (22 additions, 15 deletions)
@@ -2,7 +2,7 @@

---
author: eric-urban
ms.service: azure-ai-speech
ms.topic: include
ms.date: 02/08/2024
ms.author: eur
---

@@ -16,13 +16,14 @@

## Set up the environment

The Speech SDK for Python is available as a [Python Package Index (PyPI) module](https://pypi.org/project/azure-cognitiveservices-speech/). The Speech SDK for Python is compatible with Windows, Linux, and macOS.

- Install the [Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017, 2019, and 2022](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) for your platform. Installing this package for the first time might require a restart.
- On Linux, you must use the x64 target architecture.

Install a version of [Python from 3.7 or later](https://www.python.org/downloads/). First check the [SDK installation guide](../../../quickstarts/setup-platform.md?pivots=programming-language-python) for any more requirements.

This example also uses the `os`, `requests`, and `json` Python libraries. The `os` and `json` modules are part of the Python standard library; only `requests` requires installation.

### Set environment variables
@@ -34,19 +35,24 @@

This example requires environment variables named `OPEN_AI_KEY`, `OPEN_AI_ENDPOINT`, `OPEN_AI_DEPLOYMENT_NAME`, `SPEECH_KEY`, and `SPEECH_REGION`.

Follow these steps to create a new console application.

1. Open a command prompt window in the folder where you want the new project.

1. Run this command to install the Speech SDK:

    ```console
    pip install azure-cognitiveservices-speech
    ```

1. Run this command to install the OpenAI SDK:

    ```console
    pip install openai
    ```

    > [!NOTE]
    > This library is maintained by OpenAI, not Microsoft Azure. Refer to the [release history](https://github.com/openai/openai-python/releases) or the [version.py commit history](https://github.com/openai/openai-python/commits/main/openai/version.py) to track the latest updates to the library.

1. Create a file named *openai-speech.py*. Copy the following code into that file:

    ```Python
    import os
    ```
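Because the OpenAI library moves quickly, it can help to log which version is actually installed before debugging API differences. Here's a stdlib-only sketch; the helper name is illustrative:

```python
from importlib import metadata  # importlib.metadata requires Python 3.8+
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version string of a package, or None if the
    package isn't installed in the current environment."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# For example: print(installed_version("openai") or "openai is not installed")
```

Knowing the exact version matters here because the openai package changed its public API significantly between major releases.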
@@ -139,11 +145,11 @@

1. To increase or decrease the number of tokens returned by Azure OpenAI, change the `max_tokens` parameter. For more information about tokens and cost implications, see [Azure OpenAI tokens](/azure/ai-services/openai/overview#tokens) and [Azure OpenAI pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).

1. Run your new console application to start speech recognition from a microphone:

    ```console
    python openai-speech.py
    ```

> [!IMPORTANT]
> Make sure that you set the `OPEN_AI_KEY`, `OPEN_AI_ENDPOINT`, `OPEN_AI_DEPLOYMENT_NAME`, `SPEECH_KEY`, and `SPEECH_REGION` environment variables as described [previously](#set-environment-variables). If you don't set these variables, the sample will fail with an error message.
@@ -166,12 +172,13 @@

```console
PS C:\dev\openai\python>
```

## Remarks

Here are some more considerations:

- To change the speech recognition language, replace `en-US` with another [supported language](~/articles/ai-services/speech-service/language-support.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US`. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/ai-services/speech-service/language-identification.md).
- To change the voice that you hear, replace `en-US-JennyMultilingualNeural` with another [supported voice](~/articles/ai-services/speech-service/language-support.md#prebuilt-neural-voices). If the voice doesn't speak the language of the text returned from Azure OpenAI, the Speech service doesn't output synthesized audio.
- To use a different [model](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability), replace `gpt-35-turbo-instruct` with the ID of another [deployment](/azure/ai-services/openai/how-to/create-resource#deploy-a-model). The deployment ID isn't necessarily the same as the model name. You named your deployment when you created it in [Azure OpenAI Studio](https://oai.azure.com/).
- Azure OpenAI also performs content moderation on the prompt inputs and generated outputs. The prompts or responses might be filtered if harmful content is detected. For more information, see the [content filtering](/azure/ai-services/openai/concepts/content-filter) article.
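When swapping `en-US` for another locale as described above, a quick format check (two-letter language, dash, two-letter region, the shape used by locales like `en-US` and `es-ES`) can catch typos before a failed API call. The regex below is an illustrative format check only; it doesn't verify that the Speech service actually supports the locale, and some supported locales use longer language subtags:

```python
import re

# Matches the common xx-XX shape (e.g. en-US, es-ES); format check only.
_LOCALE_PATTERN = re.compile(r"^[a-z]{2}-[A-Z]{2}$")

def looks_like_speech_locale(tag: str) -> bool:
    """Return True if tag has the xx-XX shape used by most Speech locales."""
    return bool(_LOCALE_PATTERN.match(tag))
```

Check the supported-languages table linked above for the authoritative list of locale and voice names.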