Commit ae204b9

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into apicvid
2 parents: 7a61b34 + fec73d8

243 files changed: +2067 additions, -1633 deletions


articles/ai-services/speech-service/embedded-speech.md

Lines changed: 3 additions & 0 deletions
@@ -14,6 +14,9 @@ zone_pivot_groups: programming-languages-set-thirteen

 # Embedded Speech

+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
 Embedded Speech is designed for on-device [speech to text](speech-to-text.md) and [text to speech](text-to-speech.md) scenarios where cloud connectivity is intermittent or unavailable. For example, you can use embedded speech in industrial equipment, a voice enabled air conditioning unit, or a car that might travel out of range. You can also develop hybrid cloud and offline solutions. For scenarios where your devices must be in a secure environment like a bank or government entity, you should first consider [disconnected containers](../containers/disconnected-containers.md).

 > [!IMPORTANT]

articles/ai-services/speech-service/how-to-configure-openssl-linux.md

Lines changed: 3 additions & 0 deletions
@@ -14,6 +14,9 @@ zone_pivot_groups: programming-languages-set-three

 # Configure OpenSSL for Linux

+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
 With the Speech SDK, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version.

 > [!NOTE]

articles/ai-services/speech-service/how-to-configure-rhel-centos-7.md

Lines changed: 3 additions & 0 deletions
@@ -12,6 +12,9 @@ ms.author: pankopon

 # Configure RHEL/CentOS 7

+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
 To use the Speech SDK on Red Hat Enterprise Linux (RHEL) 7 x64 and CentOS 7 x64, update the C++ compiler (for C++ development) and the shared C++ runtime library on your system.

 ## Install dependencies
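As a quick sanity check before following the compiler-update steps this file describes, a short shell snippet can report which C++ compiler the system currently provides. This is a sketch; the `g++` probe is illustrative, and the exact required compiler version isn't stated in this commit.

```shell
# Report the currently active C++ compiler, if any.
if command -v g++ >/dev/null 2>&1; then
  g++ --version | head -n 1
else
  echo "g++ not found; install or update the compiler first"
fi
```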

articles/ai-services/speech-service/how-to-pronunciation-assessment.md

Lines changed: 454 additions & 459 deletions
Large diffs are not rendered by default.

articles/ai-services/speech-service/includes/common/environment-variables-openai.md

Lines changed: 13 additions & 13 deletions
@@ -2,16 +2,17 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 02/28/2023
+ms.date: 02/08/2024
 ms.author: eur
 ---

-Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you [get a key](~/articles/ai-services/multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource) for your <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a>, write it to a new environment variable on the local machine running the application.
+Your application must be authenticated to access Azure AI services resources. For production, use a secure way of storing and accessing your credentials. For example, after you [get a key](~/articles/ai-services/multi-service-resource.md?pivots=azportal#get-the-keys-for-your-resource) for your Speech resource, write it to a new environment variable on the local machine running the application.

 > [!TIP]
-> Don't include the key directly in your code, and never post it publicly. See the Azure AI services [security](../../../security-features.md) article for more authentication options like [Azure Key Vault](../../../use-key-vault.md).
+> Don't include the key directly in your code, and never post it publicly. See [Azure AI services security](../../../security-features.md) for more authentication options like [Azure Key Vault](../../../use-key-vault.md).
+
+To set the environment variables, open a console window, and follow the instructions for your operating system and development environment.

-To set the environment variables, open a console window, and follow the instructions for your operating system and development environment.
 - To set the `OPEN_AI_KEY` environment variable, replace `your-openai-key` with one of the keys for your resource.
 - To set the `OPEN_AI_ENDPOINT` environment variable, replace `your-openai-endpoint` with the endpoint for your resource.
 - To set the `OPEN_AI_DEPLOYMENT_NAME` environment variable, replace `your-openai-deployment-name` with the name of your deployment.
@@ -23,15 +24,15 @@ To set the environment variables, open a console window, and follow the instruct
 ```console
 setx OPEN_AI_KEY your-openai-key
 setx OPEN_AI_ENDPOINT your-openai-endpoint
-setx OPEN_AI_DEPLOYMENT_NAME=your-openai-deployment-name
+setx OPEN_AI_DEPLOYMENT_NAME your-openai-deployment-name
 setx SPEECH_KEY your-speech-key
 setx SPEECH_REGION your-speech-region
 ```

 > [!NOTE]
-> If you only need to access the environment variable in the current running console, you can set the environment variable with `set` instead of `setx`.
+> If you only need to access the environment variable in the current running console, set the environment variable with `set` instead of `setx`.

-After you add the environment variables, you may need to restart any running programs that will need to read the environment variable, including the console window. For example, if you are using Visual Studio as your editor, restart Visual Studio before running the example.
+After you add the environment variables, you might need to restart any running programs that need to read the environment variable, including the console window. For example, if Visual Studio is your editor, restart Visual Studio before running the example.

 #### [Linux](#tab/linux)

@@ -46,10 +47,9 @@ export SPEECH_REGION=your-speech-region
 After you add the environment variables, run `source ~/.bashrc` from your console window to make the changes effective.

 #### [macOS](#tab/macos)
-
 ##### Bash

-Edit your .bash_profile, and add the environment variables:
+Edit your *.bash_profile*, and add the environment variables:

 ```bash
 export OPEN_AI_KEY=your-openai-key

@@ -63,11 +63,11 @@ After you add the environment variables, run `source ~/.bash_profile` from your

 ##### Xcode

-For iOS and macOS development, you set the environment variables in Xcode. For example, follow these steps to set the environment variable in Xcode 13.4.1.
+For iOS and macOS development, set the environment variables in Xcode. For example, follow these steps to set the environment variable in Xcode 13.4.1.

-1. Select **Product** > **Scheme** > **Edit scheme**
-1. Select **Arguments** on the **Run** (Debug Run) page
-1. Under **Environment Variables** select the plus (+) sign to add a new environment variable.
+1. Select **Product** > **Scheme** > **Edit scheme**.
+1. Select **Arguments** on the **Run** (Debug Run) page.
+1. Under **Environment Variables**, select the plus (+) sign to add a new environment variable.
 1. Enter `SPEECH_KEY` for the **Name** and enter your Speech resource key for the **Value**.

 Repeat the steps to set other required environment variables.
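The Linux and macOS tabs in this diff both boil down to exporting variables and reloading the shell profile. A minimal sketch, using the article's placeholder values rather than real credentials:

```shell
# Placeholder values from the article, not real credentials.
export SPEECH_KEY=your-speech-key
export SPEECH_REGION=your-speech-region

# ${VAR:?message} makes the shell exit with an error if VAR is unset or empty,
# which is a quick way to confirm the variables are visible to this shell.
echo "SPEECH_KEY=${SPEECH_KEY:?SPEECH_KEY is not set}"
echo "SPEECH_REGION=${SPEECH_REGION:?SPEECH_REGION is not set}"
```

In a real setup the `export` lines would live in *~/.bashrc* or *.bash_profile*, as the diff shows, followed by a `source` of that file to reload it.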

articles/ai-services/speech-service/includes/how-to/compressed-audio-input/gstreamer-linux.md

Lines changed: 4 additions & 1 deletion
@@ -20,6 +20,9 @@ gstreamer1.0-plugins-ugly

 # [RHEL/CentOS](#tab/centos)

+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
 ```sh
 sudo yum install gstreamer1 \
 gstreamer1-plugins-base \

@@ -29,7 +32,7 @@ gstreamer1-plugins-ugly-free
 ```

 > [!NOTE]
-> On RHEL/CentOS 7 and RHEL/CentOS 8, in case of using "ANY" compressed format, more GStreamer plug-ins need to be installed if the stream media format plug-in isn't in the preceding installed plug-ins.
+> On RHEL/CentOS 7 and RHEL/CentOS 8, if you use the "ANY" compressed format and the stream media format plug-in isn't among the plug-ins installed earlier, you need to install more GStreamer plug-ins.

 ---
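The note about the "ANY" compressed format can be checked concretely: `gst-inspect-1.0` reports whether a given decoder element is present. A sketch, where the `mpg123audiodec` element name is an illustrative example, not taken from this commit:

```shell
# Probe for a specific GStreamer decoder element before relying on "ANY".
if ! command -v gst-inspect-1.0 >/dev/null 2>&1; then
  echo "GStreamer is not installed"
elif gst-inspect-1.0 mpg123audiodec >/dev/null 2>&1; then
  echo "MP3 decoder element is available"
else
  echo "MP3 decoder element is missing; install an extra plug-ins package"
fi
```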

articles/ai-services/speech-service/includes/how-to/recognize-speech/cpp.md

Lines changed: 2 additions & 2 deletions
@@ -196,9 +196,9 @@ speechConfig->SetSpeechRecognitionLanguage("de-DE");

 ## Language identification

-You can use [language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+You can use [language identification](../../../language-identification.md?pivots=programming-language-cpp#use-speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.

-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-cpp#speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-cpp#use-speech-to-text).

 ## Use a custom endpoint
articles/ai-services/speech-service/includes/how-to/recognize-speech/csharp.md

Lines changed: 2 additions & 2 deletions
@@ -271,9 +271,9 @@ The [`SpeechRecognitionLanguage`](/dotnet/api/microsoft.cognitiveservices.speech

 ## Language identification

-You can use [language identification](../../../language-identification.md?pivots=programming-language-csharp#speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+You can use [language identification](../../../language-identification.md?pivots=programming-language-csharp#use-speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.

-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-csharp#speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-csharp#use-speech-to-text).

 ## Use a custom endpoint

articles/ai-services/speech-service/includes/how-to/recognize-speech/java.md

Lines changed: 2 additions & 2 deletions
@@ -214,9 +214,9 @@ config.setSpeechRecognitionLanguage("fr-FR");

 ## Language identification

-You can use [language identification](../../../language-identification.md?pivots=programming-language-java#speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+You can use [language identification](../../../language-identification.md?pivots=programming-language-java#use-speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.

-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-java#speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-java#use-speech-to-text).

 ## Use a custom endpoint

articles/ai-services/speech-service/includes/how-to/recognize-speech/javascript.md

Lines changed: 2 additions & 2 deletions
@@ -194,9 +194,9 @@ The [`speechRecognitionLanguage`](/javascript/api/microsoft-cognitiveservices-sp

 ## Language identification

-You can use [language identification](../../../language-identification.md?pivots=programming-language-javascript#speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+You can use [language identification](../../../language-identification.md?pivots=programming-language-javascript#use-speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.

-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-javascript#speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-javascript#use-speech-to-text).

 ## Use a custom endpoint
