
Commit 7374113

Merge pull request #284841 from MicrosoftDocs/main
8/16 11:00 AM IST Publish
2 parents 8f0aa1e + d96f3b7 commit 7374113

56 files changed: +1274 -943 lines changed

.openpublishing.redirection.json

Lines changed: 10 additions & 0 deletions
@@ -4544,6 +4544,16 @@
       "source_path_from_root": "/articles/virtual-network/template-samples.md",
       "redirect_url": "/samples/browse/?expanded=azure&products=azure-resource-manager&terms=virtual%20network",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/virtual-network/tutorial-restrict-network-access-to-resources-cli.md",
+      "redirect_url": "/azure/virtual-network/tutorial-restrict-network-access-to-resources",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/virtual-network/tutorial-restrict-network-access-to-resources-powershell.md",
+      "redirect_url": "/azure/virtual-network/tutorial-restrict-network-access-to-resources",
+      "redirect_document_id": false
     }
   ]
 }
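Once published, a redirect like these can be spot-checked from the command line. This is a sketch only; it assumes the articles publish under learn.microsoft.com/azure:

```azurecli
# Hypothetical check: the retired CLI-tutorial URL should answer with a
# Location header pointing at the combined tutorial.
curl -sI "https://learn.microsoft.com/azure/virtual-network/tutorial-restrict-network-access-to-resources-cli" | grep -i "^location"
```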

articles/ai-services/speech-service/batch-transcription-create.md

Lines changed: 13 additions & 11 deletions
@@ -7,7 +7,7 @@ author: eric-urban
 ms.author: eur
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 7/16/2024
+ms.date: 8/14/2024
 zone_pivot_groups: speech-cli-rest
 ms.custom: devx-track-csharp
 # Customer intent: As a user who implements audio transcription, I want create transcriptions in bulk so that I don't have to submit audio content repeatedly.
@@ -18,7 +18,7 @@ ms.custom: devx-track-csharp
 With batch transcriptions, you submit [audio data](batch-transcription-audio-data.md) in a batch. The service transcribes the audio data and stores the results in a storage container. You can then [retrieve the results](batch-transcription-get.md) from the storage container.
 
 > [!IMPORTANT]
-> New pricing is in effect for batch transcription by using [Speech to text REST API v3.2](./migrate-v3-1-to-v3-2.md). For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
+> New pricing is in effect for batch transcription that uses the [speech to text REST API v3.2](./migrate-v3-1-to-v3-2.md). For more information, see the [pricing guide](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
 
 ## Prerequisites
 
@@ -28,7 +28,7 @@ You need a standard (S0) Speech resource. Free resources (F0) aren't supported.
 
 ::: zone pivot="rest-api"
 
-To create a transcription, use the [Transcriptions_Create](/rest/api/speechtotext/transcriptions/create) operation of the [Speech to text REST API](rest-speech-to-text.md#batch-transcription). Construct the request body according to the following instructions:
+To create a batch transcription job, use the [Transcriptions_Create](/rest/api/speechtotext/transcriptions/create) operation of the [speech to text REST API](rest-speech-to-text.md#batch-transcription). Construct the request body according to the following instructions:
 
 - You must set either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
 - Set the required `locale` property. This value should match the expected locale of the audio data to transcribe. You can't change the locale later.
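Taken together, those bullets imply a request of roughly the following shape. This is a sketch only: it assumes the `eastus` region and a v3.2 `transcriptions` path matching the model URIs quoted later in this file. Replace `YourSubscriptionKey` and the content URLs with your own values.

```azurecli-interactive
# Sketch of a Transcriptions_Create call; the endpoint shape is assumed
# from the v3.2 base-model URIs used elsewhere in this article.
curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
  "contentUrls": [
    "https://crbn.us/hello.wav",
    "https://crbn.us/whatstheweatherlike.wav"
  ],
  "locale": "en-US",
  "displayName": "My Transcription"
}' "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions"
```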
@@ -109,7 +109,7 @@ Call [Transcriptions_Delete](/rest/api/speechtotext/transcriptions/delete)
 regularly from the service, after you retrieve the results. Alternatively, set the `timeToLive` property to ensure the eventual deletion of the results.
 
 > [!TIP]
-> You can also try the Batch Transcription API using Python on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/batch/python/python-client/main.py).
+> You can also try the Batch Transcription API using Python, C#, or Node.js on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch).
 
 
 ::: zone-end
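The Transcriptions_Delete call named in the hunk above could look like the following sketch; the endpoint shape and the `YourTranscriptionId` placeholder are assumptions:

```azurecli-interactive
# Hypothetical cleanup call; YourTranscriptionId stands in for the ID
# returned when the transcription job was created.
curl -v -X DELETE -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions/YourTranscriptionId"
```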
@@ -118,14 +118,14 @@ regularly from the service, after you retrieve the results. Alternatively, set t
 
 To create a transcription, use the `spx batch transcription create` command. Construct the request parameters according to the following instructions:
 
-- Set the required `content` parameter. You can specify a semi-colon delimited list of individual files or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
+- Set the required `content` parameter. You can specify a comma delimited list of individual files or the URL for an entire container. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).
 - Set the required `language` property. This value should match the expected locale of the audio data to transcribe. You can't change the locale later. The Speech CLI `language` parameter corresponds to the `locale` property in the JSON request and response.
 - Set the required `name` property. Choose a transcription name that you can refer to later. The transcription name doesn't have to be unique and can be changed later. The Speech CLI `name` parameter corresponds to the `displayName` property in the JSON request and response.
 
 Here's an example Speech CLI command that creates a transcription job:
 
 ```azurecli
-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav,https://crbn.us/whatstheweatherlike.wav
 ```
 
 You should receive a response body in the following format:
@@ -236,7 +236,7 @@ curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
 ::: zone pivot="speech-cli"
 
 ```azurecli
-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav,https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/5988d691-0893-472c-851e-8e36a0fe7aaf"
 ```
 
 ::: zone-end
@@ -260,7 +260,7 @@ To use a Whisper model for batch transcription, you need to set the `model` prop
 > [!IMPORTANT]
 > For Whisper models, you should always use [version 3.2](./migrate-v3-1-to-v3-2.md) of the speech to text API.
 
-Whisper models by batch transcription are supported in the Australia East, Central US, East US, North Central US, South Central US, Southeast Asia, and West Europe regions.
+Batch transcription using Whisper models is supported in the Australia East, Central US, East US, North Central US, South Central US, Southeast Asia, and West Europe regions.
 
 ::: zone pivot="rest-api"
 You can make a [Models_ListBaseModels](/rest/api/speechtotext/models/list-base-models) request to get available base models for all locales.
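A sketch of such a list request, assuming the `eastus` region and the `models/base` path visible in the base-model URIs quoted in this diff:

```azurecli-interactive
# Hypothetical Models_ListBaseModels call; path inferred from the
# model URIs elsewhere in this file.
curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" \
  "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base"
```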
@@ -323,10 +323,10 @@ The `displayName` property of a Whisper model contains "Whisper" as shown in thi
 },
 ```
 
-You set the full model URI as shown in this example for the `eastus` region. Replace `YourSubscriptionKey` with your Speech resource key. Replace `eastus` if you're using a different region.
-
 ::: zone pivot="rest-api"
 
+You set the full model URI as shown in this example for the `eastus` region. Replace `YourSubscriptionKey` with your Speech resource key. Replace `eastus` if you're using a different region.
+
 ```azurecli-interactive
 curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
 "contentUrls": [

@@ -348,8 +348,10 @@ curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-
 
 ::: zone pivot="speech-cli"
 
+You set the full model URI as shown in this example for the `eastus` region. Replace `eastus` if you're using a different region.
+
 ```azurecli
-spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav;https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/e418c4a9-9937-4db7-b2c9-8afbff72d950" --api-version v3.2
+spx batch transcription create --name "My Transcription" --language "en-US" --content https://crbn.us/hello.wav,https://crbn.us/whatstheweatherlike.wav --model "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.2/models/base/e418c4a9-9937-4db7-b2c9-8afbff72d950" --api-version v3.2
 ```
 
 ::: zone-end

articles/ai-services/speech-service/how-to-recognize-speech.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/21/2024
+ms.date: 08/13/2024
 ms.author: eur
 ms.devlang: cpp
 ms.custom: devx-track-extended-java, devx-track-go, devx-track-js, devx-track-python
@@ -56,7 +56,7 @@ keywords: speech to text, speech to text software
 [!INCLUDE [CLI include](includes/how-to/recognize-speech/cli.md)]
 ::: zone-end
 
-## Next steps
+## Related content
 
 * [Try the speech to text quickstart](get-started-speech-to-text.md)
 * [Improve recognition accuracy with custom speech](custom-speech-overview.md)

articles/ai-services/speech-service/includes/how-to/recognize-speech/cli.md

Lines changed: 4 additions & 5 deletions
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 09/01/2023
+ms.date: 08/13/2024
 ms.author: eur
 ---
 
@@ -21,11 +21,11 @@ spx recognize --microphone
 > [!NOTE]
 > The Speech CLI defaults to English. You can choose a different language [from the speech to text table](../../../../language-support.md?tabs=stt). For example, add `--source de-DE` to recognize German speech.
 
-Speak into the microphone, and you can see transcription of your words into text in real-time. The Speech CLI stops after a period of silence, or when you select **Ctrl+C**.
+Speak into the microphone, and you can see transcription of your words into text in real time. The Speech CLI stops after a period of silence, or when you select **Ctrl+C**.
 
 ## Recognize speech from a file
 
-The Speech CLI can recognize speech in many file formats and natural languages. In this example, you can use any *.wav* file (16 KHz or 8 KHz, 16-bit, and mono PCM) that contains English speech. Or if you want a quick sample, download the <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/whatstheweatherlike.wav" download="whatstheweatherlike" target="_blank">whatstheweatherlike.wav <span class="docon docon-download x-hidden-focus"></span></a> file, and copy it to the same directory as the Speech CLI binary file.
+The Speech CLI can recognize speech in many file formats and natural languages. In this example, you can use any *.wav* file (16 kHz or 8 kHz, 16-bit, and mono PCM) that contains English speech. Or if you want a quick sample, download the file [whatstheweatherlike.wav](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/whatstheweatherlike.wav), and copy it to the same directory as the Speech CLI binary file.
 
 Use the following command to run the Speech CLI to recognize speech found in the audio file:
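The command itself falls outside this hunk's context; it is presumably the standard file-recognition form, sketched here under that assumption:

```azurecli
# Assumes the sample file sits next to the spx binary.
spx recognize --file whatstheweatherlike.wav
```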

@@ -42,5 +42,4 @@ The Speech CLI shows a text transcription of the speech on the screen.
 
 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.
 
-For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.
-
+For more information about containers, see Host URLs in [Install and run Speech containers with Docker](../../../speech-container-howto.md#host-urls).
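A rough sketch of the host-based initialization that paragraph describes; the `--host` flag and the `wss://localhost:5000/` URL are assumptions, and the linked article has the authoritative form:

```azurecli
# Assumed flag and port for a locally hosted Speech container.
spx recognize --host wss://localhost:5000/ --file whatstheweatherlike.wav
```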

articles/ai-services/speech-service/includes/how-to/recognize-speech/cpp.md

Lines changed: 12 additions & 13 deletions
@@ -2,20 +2,20 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 09/01/2023
+ms.date: 08/13/2024
 ms.author: eur
 ---
 
 [!INCLUDE [Header](../../common/cpp.md)]
 
 [!INCLUDE [Introduction](intro.md)]
 
-## Create a speech configuration
+## Create a speech configuration instance
 
-To call the Speech service using the Speech SDK, you need to create a [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance. This class includes information about your subscription, like your key and associated location/region, endpoint, host, or authorization token.
+To call the Speech service using the Speech SDK, you need to create a [`SpeechConfig`](/cpp/cognitive-services/speech/speechconfig) instance. This class includes information about your subscription, like your key and associated region, endpoint, host, or authorization token.
 
-1. Create a `SpeechConfig` instance by using your key and region.
-1. Create a Speech resource on the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices).
+1. Create a Speech resource in the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices). Get the Speech resource key and region.
+1. Create a `SpeechConfig` instance by using the following code. Replace `YourSpeechKey` and `YourSpeechRegion` with your Speech resource key and region.
 
 ```cpp
 using namespace std;
@@ -48,11 +48,11 @@ auto result = speechRecognizer->RecognizeOnceAsync().get();
 cout << "RECOGNIZED: Text=" << result->Text << std::endl;
 ```
 
-If you want to use a *specific* audio input device, you need to specify the device ID in `AudioConfig`. For more information on how to get the device ID for your audio input device, see [Select an audio input device with the Speech SDK](../../../how-to-select-audio-input-devices.md)
+If you want to use a *specific* audio input device, you need to specify the device ID in `AudioConfig`. To learn how to get the device ID, see [Select an audio input device with the Speech SDK](../../../how-to-select-audio-input-devices.md).
 
 ## Recognize speech from a file
 
-If you want to recognize speech from an audio file instead of using a microphone, you still need to create an `AudioConfig` instance. But for this case you don't call `FromDefaultMicrophoneInput()`. You call `FromWavFileInput()` and pass the file path:
+If you want to recognize speech from an audio file instead of using a microphone, you still need to create an `AudioConfig` instance. However, you don't call `FromDefaultMicrophoneInput()`. You call `FromWavFileInput()` and pass the file path:
 
 ```cpp
 using namespace Microsoft::CognitiveServices::Speech::Audio;
@@ -110,7 +110,7 @@ switch (result->Reason)
 
 ## Continuous recognition
 
-Continuous recognition is a bit more involved than single-shot recognition. It requires you to subscribe to the `Recognizing`, `Recognized`, and `Canceled` events to get the recognition results. To stop recognition, you must call [StopContinuousRecognitionAsync](/cpp/cognitive-services/speech/speechrecognizer#stopcontinuousrecognitionasync). Here's an example of how continuous recognition is performed on an audio input file.
+Continuous recognition is a bit more involved than single-shot recognition. It requires you to subscribe to the `Recognizing`, `Recognized`, and `Canceled` events to get the recognition results. To stop recognition, you must call [StopContinuousRecognitionAsync](/cpp/cognitive-services/speech/speechrecognizer#stopcontinuousrecognitionasync). Here's an example of continuous recognition performed on an audio input file.
 
 Start by defining the input and initializing [`SpeechRecognizer`](/cpp/cognitive-services/speech/speechrecognizer):
 
@@ -192,13 +192,13 @@ A common task for speech recognition is specifying the input (or source) languag
 speechConfig->SetSpeechRecognitionLanguage("de-DE");
 ```
 
-[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. For more information, see the [list of supported speech to text locales](../../../language-support.md?tabs=stt).
+[`SetSpeechRecognitionLanguage`](/cpp/cognitive-services/speech/speechconfig#setspeechrecognitionlanguage) is a parameter that takes a string as an argument. For a list of supported locales, see [Language and voice support for the Speech service](../../../language-support.md).
 
 ## Language identification
 
-You can use [language identification](../../../language-identification.md?pivots=programming-language-cpp#use-speech-to-text) with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
+You can use language identification with speech to text recognition when you need to identify the language in an audio source and then transcribe it to text.
 
-For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-cpp#use-speech-to-text).
+For a complete code sample, see [Language identification](../../../language-identification.md?pivots=programming-language-cpp).
 
 ## Use a custom endpoint
 
@@ -214,5 +214,4 @@ auto speechRecognizer = SpeechRecognizer::FromConfig(speechConfig);
 
 Speech containers provide websocket-based query endpoint APIs that are accessed through the Speech SDK and Speech CLI. By default, the Speech SDK and Speech CLI use the public Speech service. To use the container, you need to change the initialization method. Use a container host URL instead of key and region.
 
-For more information about containers, see [Host URLs](../../../speech-container-howto.md#host-urls) in Install and run Speech containers with Docker.
-
+For more information about containers, see Host URLs in [Install and run Speech containers with Docker](../../../speech-container-howto.md#host-urls).
