
Commit 3a13531

Merge pull request #382 from eric-urban/eur/speech-refresh: refresh speech docs
Authored by Jill Grant
2 parents 9b4e946 + 951b2a1, commit 3a13531

14 files changed, +58 −41 lines

articles/ai-services/speech-service/how-to-configure-openssl-linux.md

Lines changed: 13 additions & 11 deletions
@@ -1,15 +1,17 @@
 ---
 title: How to configure OpenSSL for Linux
 titleSuffix: Azure AI services
-description: Learn how to configure OpenSSL for Linux.
-author: jhakulin
+description: In this guide, you learn how to configure OpenSSL for Linux with the Azure AI Speech SDK.
+author: eric-urban
+ms.author: eur
 manager: nitinme
 ms.service: azure-ai-speech
 ms.custom: devx-track-extended-java, devx-track-go, devx-track-python, linux-related-content
 ms.topic: how-to
-ms.date: 1/18/2024
-ms.author: jhakulin
+ms.date: 9/19/2024
+ms.reviewer: jhakulin
 zone_pivot_groups: programming-languages-set-three
+#Customer intent: As a developer, I want to learn how to configure OpenSSL for Linux so that I can use the Speech SDK on my Linux system.
 ---

 # Configure OpenSSL for Linux
@@ -19,7 +21,7 @@ With the Speech SDK, [OpenSSL](https://www.openssl.org) is dynamically configure
 > [!NOTE]
 > This article is only applicable where the Speech SDK is [supported on Linux](speech-sdk.md#supported-languages).

-To ensure connectivity, verify that OpenSSL certificates have been installed in your system. Run a command:
+To ensure connectivity, verify that OpenSSL certificates are installed in your system. Run a command:
 ```bash
 openssl version -d
 ```
@@ -29,7 +31,7 @@ The output on Ubuntu/Debian based systems should be:
 OPENSSLDIR: "/usr/lib/ssl"
 ```

-Check whether there's a `certs` subdirectory under OPENSSLDIR. In the example above, it would be `/usr/lib/ssl/certs`.
+Check whether there's a `certs` subdirectory under OPENSSLDIR. In the previous example, it would be `/usr/lib/ssl/certs`.

 * If the `/usr/lib/ssl/certs` exists, and if it contains many individual certificate files (with `.crt` or `.pem` extension), there's no need for further actions.

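The certificate check this hunk describes can also be scripted. A minimal sketch, assuming `openssl` is on `PATH` (the function name is illustrative, not part of the Speech SDK):

```python
import os
import re
import subprocess

def find_certs_dir():
    """Return the OpenSSL certs directory and whether it contains certificate files."""
    # `openssl version -d` prints a line like: OPENSSLDIR: "/usr/lib/ssl"
    out = subprocess.run(["openssl", "version", "-d"],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r'OPENSSLDIR: "([^"]+)"', out)
    if not match:
        raise RuntimeError("could not parse OPENSSLDIR from: " + out)
    certs = os.path.join(match.group(1), "certs")
    populated = os.path.isdir(certs) and any(
        name.endswith((".crt", ".pem")) for name in os.listdir(certs))
    return certs, populated
```

If `populated` comes back `False`, follow the remediation steps in the article (install the `ca-certificates` package or set `SSL_CERT_FILE`).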
@@ -52,9 +54,9 @@ export SSL_CERT_FILE=/etc/pki/tls/certs/ca-bundle.crt

 ## Certificate revocation checks

-When the Speech SDK connects to the Speech service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and hasn't been revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](/azure/security/fundamentals/tls-certificate-changes).
+When the Speech SDK connects to the Speech service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and isn't revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](/azure/security/fundamentals/tls-certificate-changes).

-If a destination posing as the Speech service reports a certificate that's been revoked in a retrieved CRL, the SDK terminates the connection and reports an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK also treats a failure to download a CRL from an Azure CA location as an error.
+If a destination posing as the Speech service reports a revoked certificate in a retrieved CRL, the SDK terminates the connection and reports an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK also treats a failure to download a CRL from an Azure CA location as an error.

 > [!WARNING]
 > If your solution uses proxy or firewall it should be configured to allow access to all certificate revocation list URLs used by Azure. Note that many of these URLs are outside of `microsoft.com` domain, so allowing access to `*.microsoft.com` is not enough. See [this document](/azure/security/fundamentals/tls-certificate-changes) for details. In exceptional cases you may ignore CRL failures (see [the correspondent section](#bypassing-or-ignoring-crl-failures)), but such configuration is strongly not recommended, especially for production scenarios.
@@ -63,7 +65,7 @@ If a destination posing as the Speech service reports a certificate that's been

 One cause of CRL-related failures is the use of large CRL files. This class of error is typically only applicable to special environments with extended CA chains. Standard public endpoints shouldn't encounter this class of issue.

-The default maximum CRL size used by the Speech SDK (10 MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10 MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech service), you can set this property in its `SpeechConfig`. In the snippet below, the configuration is adjusted to permit a CRL file size up to 15 MB.
+The default maximum CRL size used by the Speech SDK (10 MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10 MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech service), you can set this property in its `SpeechConfig`. In the following code snippet, the configuration is adjusted to permit a CRL file size up to 15 MB.

 ::: zone pivot="programming-language-csharp"

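The hunk above describes a string-keyed property bag on `SpeechConfig`. As a sketch of that contract with a minimal stand-in class (the real call in the Python Speech SDK would be `SpeechConfig.set_property_by_name`; the stand-in here is purely illustrative, not SDK code):

```python
class PropertyBag:
    """Minimal stand-in for the Speech SDK's string property bag (illustration only)."""

    def __init__(self):
        self._props = {}

    def set_property_by_name(self, name: str, value: str) -> None:
        # The SDK expects property values as strings, even for numeric settings.
        if not isinstance(value, str):
            raise TypeError("Speech SDK property values are passed as strings")
        self._props[name] = value

    def get_property_by_name(self, name: str, default: str = "") -> str:
        return self._props.get(name, default)

config = PropertyBag()
# Permit CRL files up to 15 MB; the value is in kilobytes, passed as a string.
config.set_property_by_name("CONFIG_MAX_CRL_SIZE_KB", "15000")
```

With the real SDK, the same key/value pair is set on the `SpeechConfig` used to create the `SpeechRecognizer`, as the per-language snippets in the article show.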
@@ -155,7 +157,7 @@ speechConfig.properties.SetPropertyByString("OPENSSL_CONTINUE_ON_CRL_DOWNLOAD_FA

 ::: zone-end

-To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech service, there will be no attempt to check or download a CRL and no automatic verification of a reported TLS/SSL certificate.
+To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech service, there's no attempt to check or download a CRL and no automatic verification of a reported TLS/SSL certificate.

 ::: zone pivot="programming-language-csharp"

@@ -203,7 +205,7 @@ By default, the Speech SDK will cache a successfully downloaded CRL on disk to i

 Some Linux distributions don't have a `TMP` or `TMPDIR` environment variable defined, so the Speech SDK doesn't cache downloaded CRLs. Without `TMP` or `TMPDIR` environment variable defined, the Speech SDK downloads a new CRL for each connection. To improve initial connection performance in this situation, you can [create a `TMPDIR` environment variable and set it to the accessible path of a temporary directory.](https://help.ubuntu.com/community/EnvironmentVariables).

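The `TMPDIR` workaround in the paragraph above can also be applied from process startup code, before any SDK objects are created. A minimal sketch (the directory prefix is illustrative):

```python
import os
import tempfile

# Some Linux distributions define neither TMP nor TMPDIR, so the Speech SDK
# can't cache downloaded CRLs and refetches one on every connection.
# Point TMPDIR at a writable directory before creating any SDK objects.
if not os.environ.get("TMPDIR") and not os.environ.get("TMP"):
    os.environ["TMPDIR"] = tempfile.mkdtemp(prefix="speech-crl-cache-")

crl_cache_dir = os.environ.get("TMPDIR") or os.environ.get("TMP")
```

Setting the variable in a shell profile, as the linked Ubuntu page describes, achieves the same effect for every process.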
-## Next steps
+## Related content

 - [Speech SDK overview](speech-sdk.md)
 - [Install the Speech SDK](quickstarts/setup-platform.md)

articles/ai-services/speech-service/how-to-control-connections.md

Lines changed: 10 additions & 8 deletions
@@ -2,15 +2,17 @@
 title: Service connectivity how-to - Speech SDK
 titleSuffix: Azure AI services
 description: Learn how to monitor for connection status and manually connect or disconnect from the Speech service.
-author: trrwilson
+author: eric-urban
+ms.author: eur
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/18/2024
-ms.author: travisw
+ms.date: 9/19/2024
+ms.reviewer: travisw
 zone_pivot_groups: programming-languages-set-thirteen
 ms.devlang: cpp
 ms.custom: devx-track-csharp, devx-track-extended-java
+#Customer intent: As a developer, I want to learn how to monitor and control service connections with the Speech SDK so that I can manage connections to the Speech service.
 ---

 # How to monitor and control service connections with the Speech SDK
@@ -96,14 +98,14 @@ connection.disconnected.addEventListener((s, connectionEventArgs) -> {

 `Connection` has explicit methods to start or end a connection to the Speech service. Reasons you might want to control the connection include:

-- Preconnecting to the Speech service to allow the first interaction to start as quickly as possible
-- Establishing connection at a specific time in your application's logic to gracefully and predictably handle initial connection failures
-- Disconnecting to clear an idle connection when you don't expect immediate reconnection but also don't want to destroy the object
+- Preconnecting to the Speech service to allow the first interaction to start as quickly as possible.
+- Establishing connection at a specific time in your application's logic to gracefully and predictably handle initial connection failures.
+- Disconnecting to clear an idle connection when you don't expect immediate reconnection but also don't want to destroy the object.

 Some important notes on the behavior when manually modifying connection state:

-- Trying to connect when already connected will do nothing. It will not generate an error. Monitor the `Connected` and `Disconnected` events if you want to know the current state of the connection.
-- A failure to connect that originates from a problem that has no involvement with the Speech service--such as attempting to do so from an invalid state--will throw or return an error as appropriate to the programming language. Failures that require network resolution--such as authentication failures--won't throw or return an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.
+- Trying to connect when already connected doesn't generate an error. Monitor the `Connected` and `Disconnected` events if you want to know the current state of the connection.
+- A failure to connect that originates from a problem that has no involvement with the Speech service--such as attempting to do so from an invalid state--results in an error as appropriate to the programming language. Failures that require network resolution--such as authentication failures--don't result in an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.
 - Manually disconnecting from the Speech service during an ongoing interaction results in a connection error and loss of data for that interaction. Connection errors are surfaced on the appropriate top-level object's `Canceled` event.

 ::: zone pivot="programming-language-csharp"
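The connect/disconnect semantics described in this hunk (idempotent connect, explicit close that keeps the object alive, state reported through events) can be sketched with a minimal stand-in class. This models only the behavior the notes describe; the real object comes from the SDK (in Python, the assumption would be `Connection.from_recognizer(recognizer)`):

```python
class Connection:
    """Minimal stand-in for the SDK's Connection object (illustration only)."""

    def __init__(self):
        self._connected = False
        self.events = []  # the real SDK fires Connected/Disconnected events instead

    def open(self):
        # Connecting while already connected does nothing and raises no error.
        if not self._connected:
            self._connected = True
            self.events.append("Connected")

    def close(self):
        # Disconnecting clears the connection but keeps the object usable.
        if self._connected:
            self._connected = False
            self.events.append("Disconnected")

conn = Connection()
conn.open()   # preconnect so the first interaction starts quickly
conn.open()   # no-op: already connected, no error
conn.close()  # drop an idle connection without destroying the object
```

As the notes state, observing the events (rather than the return value of connect) is the reliable way to track connection state.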

articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md

Lines changed: 8 additions & 7 deletions
@@ -2,12 +2,13 @@
 title: CI/CD for custom speech - Speech service
 titleSuffix: Azure AI services
 description: Apply DevOps with custom speech and CI/CD workflows. Implement an existing DevOps solution for your own project.
-author: nitinme
-manager: cmayomsft
+author: eric-urban
+ms.author: eur
+manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/19/2024
-ms.author: nitinme
+ms.date: 9/19/2024
+#Customer intent: As a developer, I want to learn how to apply DevOps with custom speech and CI/CD workflows so that I can implement an existing DevOps solution for my own project.
 ---

 # CI/CD for custom speech
@@ -18,15 +19,15 @@ Implement automated training, testing, and release management to enable continuo

 [Continuous delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes models from the CI process and creates an endpoint for each improved custom speech model. CD makes endpoints easily available to be integrated into solutions.

-Custom CI/CD solutions are possible, but for a robust, pre-built solution, use the [Speech DevOps template repository](https://github.com/Azure-Samples/Speech-Service-DevOps-Template), which executes CI/CD workflows using GitHub Actions.
+Custom CI/CD solutions are possible, but for a robust, prebuilt solution, use the [Speech DevOps template repository](https://github.com/Azure-Samples/Speech-Service-DevOps-Template), which executes CI/CD workflows using GitHub Actions.

 ## CI/CD workflows for custom speech

-The purpose of these workflows is to ensure that each custom speech model has better recognition accuracy than the previous build. If the updates to the testing and/or training data improve the accuracy, these workflows create a new custom speech endpoint.
+The purpose of these workflows is to ensure that each custom speech model has better recognition accuracy than the previous build. If the updates to the testing or training data improve the accuracy, these workflows create a new custom speech endpoint.

 Git servers such as GitHub and Azure DevOps can run automated workflows when specific Git events happen, such as merges or pull requests. For example, a CI workflow can be triggered when updates to testing data are pushed to the *main* branch. Different Git Servers have different tooling, but allow scripting command-line interface (CLI) commands so that they can execute on a build server.

-Along the way, the workflows should name and store data, tests, test files, models, and endpoints such that they can be traced back to the commit or version they came from. It's also helpful to name these assets so that it's easy to see which were created after updating testing data versus training data.
+The workflows should name and store data, tests, test files, models, and endpoints such that they can be traced back to the commit or version they came from. It's also helpful to name these assets so that it's easy to see which were created after updating testing data versus training data.

 ### CI workflow for testing data updates
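The traceability guidance in this hunk (name assets so they map back to a commit and to the kind of data that changed) can be sketched as a small naming helper. The scheme below is illustrative only, not part of the Speech DevOps template repository:

```python
def asset_name(asset_type: str, data_kind: str, commit_sha: str) -> str:
    """Build a traceable name for a custom speech asset.

    asset_type: for example "model", "test", or "endpoint".
    data_kind:  "testing" or "training", so it's easy to see which data
                update produced the asset.
    commit_sha: the Git commit the data came from (a short SHA is enough).
    """
    if data_kind not in ("testing", "training"):
        raise ValueError("data_kind must be 'testing' or 'training'")
    return f"{asset_type}-{data_kind}-{commit_sha[:7]}"

name = asset_name("model", "training", "3a135310badc0de")
```

A workflow can then log or tag each trained model, test run, and endpoint with such a name, making regressions easy to bisect back to the data change that caused them.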

articles/ai-services/speech-service/how-to-custom-speech-create-project.md

Lines changed: 2 additions & 1 deletion
@@ -6,9 +6,10 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 7/15/2024
+ms.date: 9/19/2024
 ms.author: eur
 zone_pivot_groups: speech-studio-cli-rest
+#Customer intent: As a developer, I want to learn how to create a project for custom speech so that I can train and deploy a custom model.
 ---

 # Create a custom speech project

articles/ai-services/speech-service/how-to-custom-speech-deploy-model.md

Lines changed: 3 additions & 2 deletions
@@ -6,9 +6,10 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 7/15/2024
+ms.date: 9/19/2024
 ms.author: eur
 zone_pivot_groups: speech-studio-cli-rest
+#Customer intent: As a developer, I want to learn how to deploy a custom speech model so that I can use it in my applications.
 ---

 # Deploy a custom speech model
@@ -385,7 +386,7 @@ The locations of each log file with more details are returned in the response bo

 Logging data is available on Microsoft-owned storage for 30 days, and then it's removed. If your own storage account is linked to the Azure AI services subscription, the logging data isn't automatically deleted.

-## Next steps
+## Related content

 - [CI/CD for custom speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
 - [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md)

articles/ai-services/speech-service/how-to-custom-speech-display-text-format.md

Lines changed: 2 additions & 1 deletion
@@ -6,8 +6,9 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/19/2024
+ms.date: 9/19/2024
 ms.author: eur
+#Customer intent: As a developer, I want to learn how to prepare display text format training data for custom speech so that I can customize the display text formatting pipeline for my specific scenarios.
 ---

 # How to prepare display text format training data for custom speech

articles/ai-services/speech-service/how-to-custom-speech-evaluate-data.md

Lines changed: 2 additions & 1 deletion
@@ -6,11 +6,12 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 7/15/2024
+ms.date: 9/19/2024
 ms.author: eur
 zone_pivot_groups: speech-studio-cli-rest
 show_latex: true
 no-loc: [$$, '\times', '\over']
+#Customer intent: As a developer, I want to test the accuracy of a custom speech model so that I can evaluate whether it meets my requirements.
 ---

 # Test accuracy of a custom speech model

articles/ai-services/speech-service/how-to-custom-speech-human-labeled-transcriptions.md

Lines changed: 2 additions & 1 deletion
@@ -6,8 +6,9 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/19/2024
+ms.date: 9/19/2024
 ms.author: eur
+#Customer intent: As a developer, I need to understand how to create human-labeled transcriptions for my audio data so that I can improve speech recognition accuracy.
 ---

 # How to create human-labeled transcriptions

articles/ai-services/speech-service/how-to-custom-speech-inspect-data.md

Lines changed: 2 additions & 1 deletion
@@ -6,9 +6,10 @@ author: eric-urban
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 7/15/2024
+ms.date: 9/19/2024
 ms.author: eur
 zone_pivot_groups: speech-studio-cli-rest
+#Customer intent: As a developer, I want to test the recognition quality of a custom speech model so that I can determine if the provided recognition result is correct.
 ---

 # Test recognition quality of a custom speech model

articles/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle.md

Lines changed: 6 additions & 4 deletions
@@ -2,13 +2,15 @@
 title: Model lifecycle of custom speech - Speech service
 titleSuffix: Azure AI services
 description: Custom speech provides base models for training and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
-author: heikora
-manager: dongli
+author: eric-urban
+manager: nitinme
+ms.author: eur
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 1/19/2024
-ms.author: heikora
+ms.date: 9/19/2024
+ms.reviewer: heikora
 zone_pivot_groups: speech-studio-cli-rest
+#Customer intent: As a developer, I want to understand the lifecycle of custom speech models and endpoints so that I can plan for the expiration of my models.
 ---

 # Custom speech model lifecycle
