articles/cognitive-services/Speech-Service/includes/quickstarts/openai-speech/intro.md
2 additions & 2 deletions
@@ -9,11 +9,11 @@ ms.author: eur
> [!IMPORTANT]
> To complete the steps in this guide, access must be granted to Microsoft Azure OpenAI Service in the desired Azure subscription. Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access).
-In this how-to guide, you can use Speech to converse with Azure OpenAI. The text recognized by the Speech service is sent to Azure OpenAI. The text response from Azure OpenAI is then synthesized by the Speech service.
+In this how-to guide, you can use [Speech](overview.md) to converse with [Azure OpenAI](/azure/cognitive-services/openai/overview). The text recognized by the Speech service is sent to Azure OpenAI. The text response from Azure OpenAI is then synthesized by the Speech service.
Speak into the microphone to start a conversation with Azure OpenAI.
- Azure Cognitive Services Speech recognizes your speech and converts it into text (speech-to-text).
-- Your request as text is sent to the Azure OpenAI service.
+- Your request as text is sent to Azure OpenAI.
- Azure Cognitive Services Speech synthesizes (text-to-speech) the response from Azure OpenAI to the default speaker.
Although the experience of this example is a back-and-forth exchange, Azure OpenAI doesn't remember the context of your conversation.
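As a rough companion to the flow described above (not part of the documented quickstart code), here is a minimal Python sketch of one loop iteration: recognize speech, send the recognized text to Azure OpenAI, and synthesize the response. It assumes the `azure-cognitiveservices-speech` and `openai` (0.x) packages; the keys, endpoint, and deployment ID are placeholders.

```python
# Minimal sketch of the recognize -> Azure OpenAI -> synthesize loop described above.
# Assumes azure-cognitiveservices-speech and openai (0.x) are installed; the keys,
# endpoint, and deployment ID below are placeholders, not values from this diff.
import openai
import azure.cognitiveservices.speech as speechsdk

openai.api_type = 'azure'
openai.api_version = '2022-12-01'
openai.api_key = '<your-azure-openai-key>'                      # placeholder
openai.api_base = 'https://<your-resource>.openai.azure.com/'   # placeholder

speech_config = speechsdk.SpeechConfig(subscription='<speech-key>', region='<speech-region>')
speech_config.speech_synthesis_voice_name = 'en-US-JennyMultilingualNeural'

# Default microphone for input, default speaker for output.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

print("Speak into your microphone.")
result = recognizer.recognize_once_async().get()            # speech-to-text
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    completion = openai.Completion.create(
        engine='text-davinci-002',                          # your deployment ID
        prompt=result.text,
        max_tokens=100)
    response_text = completion.choices[0].text.strip()      # Azure OpenAI response
    synthesizer.speak_text_async(response_text).get()       # text-to-speech
```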
articles/cognitive-services/Speech-Service/includes/quickstarts/openai-speech/python.md
6 additions & 6 deletions
@@ -60,7 +60,7 @@ Follow these steps to create a new console application.
openai.api_type = 'azure'
openai.api_version = '2022-12-01'
-#This will correspond to the custom name you chose for your deployment when you deployed a model.
+#This will correspond to the custom name you chose for your deployment when you deployed a model.
deployment_id='text-davinci-002'
# This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
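The comment edited above refers to the deployment ID. As a hedged illustration (not the quickstart's exact code), this is how that ID is typically passed to the `openai` 0.x package for an Azure resource, via the `engine` parameter; the key, endpoint, and prompt below are placeholders.

```python
import openai

openai.api_type = 'azure'
openai.api_version = '2022-12-01'
openai.api_key = '<your-azure-openai-key>'                      # placeholder
openai.api_base = 'https://<your-resource>.openai.azure.com/'   # placeholder

# The deployment ID is the custom name you chose when you deployed the model;
# it isn't necessarily the same as the model name.
deployment_id = 'text-davinci-002'

completion = openai.Completion.create(
    engine=deployment_id,            # Azure OpenAI takes the deployment ID here
    prompt='Say hello.',             # placeholder prompt
    max_tokens=100)
print(completion.choices[0].text.strip())
```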
@@ -139,9 +139,9 @@ python openai-speech.py
```
> [!IMPORTANT]
-> Make sure that you set the `OPEN_AI_KEY`, `OPEN_AI_ENDPOINT`, `SPEECH__KEY` and `SPEECH__REGION` environment variables as described [above](#set-environment-variables). If you don't set these variables, the sample will fail with an error message.
+> Make sure that you set the `OPEN_AI_KEY`, `OPEN_AI_ENDPOINT`, `SPEECH__KEY` and `SPEECH__REGION` environment variables as described [previously](#set-environment-variables). If you don't set these variables, the sample will fail with an error message.
-Speak into your microphone when prompted. The console output will include the prompt for you to begin speaking, then your request as text, and then the Azure OpenAI response as text. The Azure OpenAI response should be converted from text to speech and output to the default speaker.
+Speak into your microphone when prompted. The console output includes the prompt for you to begin speaking, then your request as text, and then the response from Azure OpenAI as text. The response from Azure OpenAI should be converted from text to speech and then output to the default speaker.
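A minimal sketch of the environment-variable check the note above implies. It uses the single-underscore names from the code comment earlier in this diff (`SPEECH_KEY`, `SPEECH_REGION`); the exact variable names and error handling in the sample may differ.

```python
import os
import sys

# Fail fast with a clear message if a required variable is missing
# (variable names assumed from the note and code comment above).
required = ['OPEN_AI_KEY', 'OPEN_AI_ENDPOINT', 'SPEECH_KEY', 'SPEECH_REGION']
missing = [name for name in required if not os.environ.get(name)]
if missing:
    sys.exit('Missing environment variables: ' + ', '.join(missing))

openai_key = os.environ['OPEN_AI_KEY']
openai_endpoint = os.environ['OPEN_AI_ENDPOINT']
speech_key = os.environ['SPEECH_KEY']
speech_region = os.environ['SPEECH_REGION']
```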
-Now that you've completed the quickstart, here are some additional considerations:
+Now that you've completed the quickstart, here are some more considerations:
- To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
-- To change the voice that you hear, replace `en-US-JennyMultilingualNeural` with another [supported voice](~/articles/cognitive-services/speech-service/supported-languages.md#prebuilt-neural-voices). If the voice does not speak the language of the text returned from Azure OpenAI, the Speech service won't output synthesized audio.
-- To use a different [deployed](/azure/cognitive-services/openai/how-to/create-resource#deploy-a-model) Azure OpenAI model, replace `text-davinci-002` with another [model](/azure/cognitive-services/openai/concepts/models#model-summary-table-and-region-availability). For example, `text-davinci-003` for the latest version of the Davinci model.
+- To change the voice that you hear, replace `en-US-JennyMultilingualNeural` with another [supported voice](~/articles/cognitive-services/speech-service/supported-languages.md#prebuilt-neural-voices). If the voice doesn't speak the language of the text returned from Azure OpenAI, the Speech service doesn't output synthesized audio.
+- To use a different [model](/azure/cognitive-services/openai/concepts/models#model-summary-table-and-region-availability), replace `text-davinci-002` with the ID of another [deployment](/azure/cognitive-services/openai/how-to/create-resource#deploy-a-model). Keep in mind that the deployment ID isn't necessarily the same as the model name. You named your deployment when you created it in [Azure OpenAI Studio](https://oai.azure.com/).
- Azure OpenAI also performs content moderation on the prompt inputs and generated outputs. The prompts or responses may be filtered if harmful content is detected. For more information, see the [content filtering](/azure/cognitive-services/openai/concepts/content-filter) article.
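As a hedged sketch of where the settings from the considerations list would go, the snippet below shows the recognition language, the synthesis voice, and the deployment ID being changed. The Spanish locale, voice, and deployment name are illustrative values, not ones taken from this diff.

```python
import openai
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription='<speech-key>', region='<speech-region>')

# Speech recognition language (the default is en-US).
speech_config.speech_recognition_language = 'es-ES'

# Synthesis voice; it should speak the language of the text Azure OpenAI returns.
speech_config.speech_synthesis_voice_name = 'es-ES-ElviraNeural'    # illustrative voice

# A different Azure OpenAI deployment: pass its deployment ID (the name you chose),
# which can differ from the underlying model name.
openai.api_type = 'azure'
openai.api_version = '2022-12-01'
openai.api_key = '<your-azure-openai-key>'                      # placeholder
openai.api_base = 'https://<your-resource>.openai.azure.com/'   # placeholder
completion = openai.Completion.create(
    engine='my-davinci-003-deployment',   # hypothetical deployment ID
    prompt='Hola',
    max_tokens=100)
print(completion.choices[0].text.strip())
```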