Commit a6bd1da

Merge pull request #214989 from pritamso/Broken-link-fix-eric-urban
Broken link fixed
2 parents 3660aab + a194aef commit a6bd1da

File tree: 7 files changed (+12 −12 lines)


articles/cognitive-services/Speech-Service/batch-transcription-get.md

Lines changed: 2 additions & 2 deletions

@@ -189,7 +189,7 @@ By default, the results are stored in a container managed by Microsoft. When the
 ::: zone pivot="rest-api"
 
-The [GetTranscriptionsFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionsFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
+The [GetTranscriptionFiles](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetTranscriptionFiles) operation returns a list of result files for a transcription. A [transcription report](#transcription-report-file) file is provided for each submitted batch transcription job. In addition, one [transcription](#transcription-result-file) file (the end result) is provided for each successfully transcribed audio file.
 
 Make an HTTP GET request using the "files" URI from the previous response body. Replace `YourTranscriptionId` with your transcription ID, replace `YourSubscriptionKey` with your Speech resource key, and replace `YourServiceRegion` with your Speech resource region.
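As a sketch of the GET request described in the hunk above (not part of this commit; the host and path shape are assumptions based on the linked v3.0 operation, and all values are placeholders), a dry run might look like:

```shell
# Build the request URI from placeholders; substitute real values before use.
REGION="YourServiceRegion"
TRANSCRIPTION_ID="YourTranscriptionId"
KEY="YourSubscriptionKey"
URI="https://${REGION}.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/${TRANSCRIPTION_ID}/files"

# Dry run: print the curl command instead of sending it (no real key here).
echo curl -v -X GET "$URI" -H "Ocp-Apim-Subscription-Key: $KEY"
```

Running the printed command with real values returns the JSON list of result files for the transcription.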

@@ -366,4 +366,4 @@ Depending in part on the request parameters set when you created the transcripti
 - [Batch transcription overview](batch-transcription.md)
 - [Locate audio files for batch transcription](batch-transcription-audio-data.md)
-- [Create a batch transcription](batch-transcription-create.md)
+- [Create a batch transcription](batch-transcription-create.md)

articles/cognitive-services/Speech-Service/devices-sdk-release-notes.md

Lines changed: 1 addition & 1 deletion

@@ -34,7 +34,7 @@ The following sections list changes in the most recent releases.
 ## Speech Devices SDK 1.11.0:
 
 - Support for arbitrary microphone array geometries and setting the working angle through a [configuration file](https://aka.ms/sdsdk-micarray-json).
-- Support for [Urbetter DDK](http://www.urbetter.com/products_56/278.html).
+- Support for [Urbetter DDK](https://urbetters.com/collections).
 - Released binaries for the [GGEC Speaker](https://aka.ms/sdsdk-download-speaker) used in our [Voice Assistant sample](https://aka.ms/sdsdk-speaker).
 - Released binaries for [Linux ARM32](https://aka.ms/sdsdk-download-linux-arm32) and [Linux ARM 64](https://aka.ms/sdsdk-download-linux-arm64) for Raspberry Pi and similar devices.
 - Updated the [Speech SDK](./speech-sdk.md) component to version 1.11.0. For more information, see its [release notes](./releasenotes.md).

articles/cognitive-services/Speech-Service/faq-stt.yml

Lines changed: 2 additions & 2 deletions

@@ -87,14 +87,14 @@ sections:
 - question: |
     Can I copy or move my datasets, models, and deployments to another region or subscription?
   answer: |
-    You can use the [REST API](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/Models_CopyToToSubscription) to copy a custom model to another region or subscription. Datasets and deployments can't be copied. You can import a dataset again in another subscription and create endpoints there by using the model copies.
+    You can use the [REST API](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) to copy a custom model to another region or subscription. Datasets and deployments can't be copied. You can import a dataset again in another subscription and create endpoints there by using the model copies.
 
 - question: |
     Are my requests logged?
   answer: |
     By default, requests aren't logged (neither audio nor transcription). If necessary, you can select the **Log content from this endpoint** option when you [create a custom endpoint](how-to-custom-speech-deploy-model.md#add-a-deployment-endpoint). You can also enable audio logging in the [Speech SDK](how-to-use-logging.md) on a per-request basis, without having to create a custom endpoint. In both cases, audio and recognition results of requests will be stored in secure storage. Subscriptions that use Microsoft-owned storage will be available for 30 days.
-    You can export the logged files on the deployment page in Speech Studio if you use a custom endpoint with **Log content from this endpoint** enabled. If audio logging is enabled via the SDK, call the [API](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/Endpoints_ListBaseModelLogs) to access the files. You can also use API to [delete the logs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/Endpoints_DeleteBaseModelLogs) any time.
+    You can export the logged files on the deployment page in Speech Studio if you use a custom endpoint with **Log content from this endpoint** enabled. If audio logging is enabled via the SDK, call the API to access the files. You can also use the API to [delete the logs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/DeleteBaseModelLog) any time.
 
 - question: |
     Are my requests throttled?

articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md

Lines changed: 1 addition & 1 deletion

@@ -45,7 +45,7 @@ Training with plain text or structured text usually finishes within a few minutes
 >
 > Start with small sets of sample data that match the language, acoustics, and hardware where your model will be used. Small datasets of representative data can expose problems before you invest in gathering larger datasets for training. For sample Custom Speech data, see <a href="https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/sampledata/customspeech" target="_target">this GitHub repository</a>.
 
-If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) REST API.
+If you will train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information. In regions with dedicated hardware for Custom Speech training, the Speech service will use up to 20 hours of your audio training data, and can process about 10 hours of data per day. In other regions, the Speech service uses up to 8 hours of your audio data, and can process about 1 hour of data per day. After the model is trained, you can copy the model to another region as needed with the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) REST API.
 
 ## Consider datasets by scenario
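The dedicated-versus-shared figures in the changed paragraph above can be turned into a quick back-of-the-envelope estimate (an illustrative helper, not part of the commit or the underlying article):

```shell
# Rough processing-time estimate from the figures above:
# dedicated-hardware regions use up to 20 h of audio at ~10 h/day;
# other regions use up to 8 h of audio at ~1 h/day.
estimated_processing_days() {
  audio_hours=$1
  dedicated=$2   # 1 = region with dedicated hardware, 0 = other region
  if [ "$dedicated" -eq 1 ]; then cap=20; rate=10; else cap=8; rate=1; fi
  used=$(( audio_hours < cap ? audio_hours : cap ))
  echo $(( (used + rate - 1) / rate ))   # whole days, rounded up
}
```

For example, 30 hours of audio in a dedicated-hardware region caps at 20 hours used and roughly 2 days of processing.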

articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md

Lines changed: 2 additions & 2 deletions

@@ -230,7 +230,7 @@ Copying a model directly to a project in another region is not supported with the
 ::: zone pivot="rest-api"
 
-To copy a model to another Speech resource, use the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To copy a model to another Speech resource, use the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
 
 - Set the required `targetSubscriptionKey` property to the key of the destination Speech resource.

@@ -332,7 +332,7 @@ To connect a new model to a project of the Speech resource where the model was c
 - Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
 
-Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
+Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.
 
 ```azurecli-interactive
 curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
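The CopyModelToSubscription call referenced in the hunks above could be sketched as follows (not part of this commit; the `/models/{id}/copyto` path and all placeholder names are assumptions based on the linked v3.0 operation):

```shell
# Build a copy-model request from placeholders; substitute real values before use.
REGION="YourServiceRegion"
MODEL_ID="YourModelId"
KEY="YourSubscriptionKey"
TARGET_KEY="YourTargetSubscriptionKey"   # key of the destination Speech resource
URI="https://${REGION}.api.cognitive.microsoft.com/speechtotext/v3.0/models/${MODEL_ID}/copyto"

# Dry run: print the curl command instead of sending it.
echo curl -v -X POST "$URI" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d "{\"targetSubscriptionKey\": \"$TARGET_KEY\"}"
```

The `self` property of the response would then carry the new model's URI, as the second hunk describes.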

articles/cognitive-services/Speech-Service/includes/quickstarts/call-center/azure-prerequisites.md

Lines changed: 2 additions & 2 deletions

@@ -9,10 +9,10 @@ ms.author: eur
 
 > [!div class="checklist"]
 > * Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services)
-> * <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne" title="Create a Cognitive Services resource" target="_blank">Create a Cognitive Services multi-service resource</a> in the Azure portal. This quickstart only requires one Cognitive Services [multi-service resource](../../../../cognitive-services-apis-create-account.md?tabs=multiservice#create-a-new-azure-cognitive-services-resource). The sample code allows you to specify separate <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language</a> and <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech</a> resource keys.
+> * <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne" title="Create a Cognitive Services resource" target="_blank">Create a Cognitive Services multi-service resource</a> in the Azure portal. This quickstart only requires one Cognitive Services [multi-service resource](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice#create-a-new-azure-cognitive-services-resource). The sample code allows you to specify separate <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language</a> and <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech</a> resource keys.
 > * Get the resource key and region. After your Cognitive Services resource is deployed, select **Go to resource** to view and manage keys. For more information about Cognitive Services resources, see [Get the keys for your resource](~/articles/cognitive-services/cognitive-services-apis-create-account.md#get-the-keys-for-your-resource).
 
 > [!IMPORTANT]
 > This quickstart requires access to [conversation summarization](../../../../language-service/summarization/how-to/conversation-summarization.md). To get access, you must submit an [online request](https://aka.ms/applyforconversationsummarization/) and have it approved.
 >
-> The `--languageKey` and `--languageEndpoint` values in this quickstart must correspond to a resource that's in one of the regions supported by the [conversation summarization API](https://aka.ms/convsumregions): `eastus`, `northeurope`, and `uksouth`.
+> The `--languageKey` and `--languageEndpoint` values in this quickstart must correspond to a resource that's in one of the regions supported by the [conversation summarization API](https://aka.ms/convsumregions): `eastus`, `northeurope`, and `uksouth`.

articles/cognitive-services/Speech-Service/resiliency-and-recovery-plan.md

Lines changed: 2 additions & 2 deletions

@@ -69,7 +69,7 @@ You should create Speech Service resources in both a main and a secondary region
 Custom Speech Service doesn't support automatic failover. We suggest the following steps to prepare for manual or automatic failover implemented in your client code. In these steps, you replicate custom models in a secondary region. With this preparation, your client code can switch to a secondary region when the primary region fails.
 
 1. Create your custom model in one main region (Primary).
-2. Run the [CopyModelToSubscriptionToSubscription](https://eastus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) operation to replicate the custom model to all prepared regions (Secondary).
+2. Run the [CopyModelToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription) operation to replicate the custom model to all prepared regions (Secondary).
 3. Go to Speech Studio to load the copied model and create a new endpoint in the secondary region. See how to deploy a new model in [Deploy a Custom Speech model](./how-to-custom-speech-deploy-model.md).
    - If you have set a specific quota, also consider setting the same quota in the backup regions. See details in [Speech service Quotas and Limits](./speech-services-quotas-and-limits.md).
 4. Configure your client to fail over on persistent errors as with the default endpoints usage.

@@ -117,4 +117,4 @@ Check the [public voices available](language-support.md?tabs=stt-tts). You can a
 Speaker Recognition uses [Azure paired regions](../../availability-zones/cross-region-replication-azure.md) to automatically fail over operations. Speaker enrollments and voice signatures are backed up regularly to prevent data loss and to be used if there's an outage.
 
-During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
+During an outage, Speaker Recognition service will automatically fail over to a paired region and use the backed-up data to continue processing requests until the main region is back online.
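The client-side failover described in steps 1-4 of the first hunk above can be sketched as follows (a hypothetical helper, not part of the commit; `do_request` stands in for whatever recognition call the client makes):

```shell
# Placeholder request: replace with a real call to the Speech endpoint.
do_request() {
  curl -fsS "$1" >/dev/null 2>&1
}

# Try the primary endpoint first; after persistent errors, fail over to the
# secondary endpoint where the copied model is deployed.
recognize_with_failover() {
  primary=$1; secondary=$2
  for endpoint in "$primary" "$secondary"; do
    # Two attempts per endpoint before treating the error as persistent.
    for attempt in 1 2; do
      if do_request "$endpoint"; then
        echo "$endpoint"   # success: report which endpoint served the request
        return 0
      fi
    done
  done
  return 1   # all endpoints failed
}
```

The retry count and error classification are design choices; production clients would typically distinguish transient faults (retry in place) from persistent ones (fail over).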
