---
#Customer intent: As a developer, I want to learn how to configure OpenSSL for Linux so that I can use the Speech SDK on my Linux system.
---
# Configure OpenSSL for Linux
With the Speech SDK, [OpenSSL](https://www.openssl.org) is dynamically configured to the host-system version.
> [!NOTE]
> This article is only applicable where the Speech SDK is [supported on Linux](speech-sdk.md#supported-languages).
To ensure connectivity, verify that OpenSSL certificates are installed in your system. Run a command:
```bash
openssl version -d
```
The output on Ubuntu/Debian based systems should be:

```
OPENSSLDIR: "/usr/lib/ssl"
```
Check whether there's a `certs` subdirectory under OPENSSLDIR. In the previous example, it would be `/usr/lib/ssl/certs`.
* If the `/usr/lib/ssl/certs` directory exists, and if it contains many individual certificate files (with a `.crt` or `.pem` extension), there's no need for further action.
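As a quick cross-check using only the Python standard library, you can print the locations OpenSSL was built to use and count the certificate files there. This is a sketch, not part of the Speech SDK; exact paths vary by distribution:

```python
import os
import ssl

# Ask OpenSSL (via Python's ssl module) where it looks for CA certificates
paths = ssl.get_default_verify_paths()
print("CA file:", paths.openssl_cafile)
print("CA path:", paths.openssl_capath)

# Count certificate files in the certs directory, if it exists
# (includes hashed symlink names such as *.0 that OpenSSL also uses)
cert_dir = paths.openssl_capath
if os.path.isdir(cert_dir):
    certs = [f for f in os.listdir(cert_dir) if f.endswith((".crt", ".pem", ".0"))]
    print(f"{len(certs)} certificate files found in {cert_dir}")
else:
    print(f"{cert_dir} does not exist")
```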
When the Speech SDK connects to the Speech service, it checks the Transport Layer Security (TLS/SSL) certificate. The Speech SDK verifies that the certificate reported by the remote endpoint is trusted and isn't revoked. This verification provides a layer of protection against attacks involving spoofing and other related vectors. The check is accomplished by retrieving a certificate revocation list (CRL) from a certificate authority (CA) used by Azure. A list of Azure CA download locations for updated TLS CRLs can be found in [this document](/azure/security/fundamentals/tls-certificate-changes).
If a destination posing as the Speech service reports a revoked certificate in a retrieved CRL, the SDK terminates the connection and reports an error via a `Canceled` event. The authenticity of a reported certificate can't be checked without an updated CRL. Therefore, the Speech SDK also treats a failure to download a CRL from an Azure CA location as an error.
> [!WARNING]
> If your solution uses a proxy or firewall, it should be configured to allow access to all certificate revocation list URLs used by Azure. Note that many of these URLs are outside the `microsoft.com` domain, so allowing access to `*.microsoft.com` isn't enough. See [this document](/azure/security/fundamentals/tls-certificate-changes) for details. In exceptional cases you can ignore CRL failures (see [the corresponding section](#bypassing-or-ignoring-crl-failures)), but such a configuration is strongly discouraged, especially for production scenarios.
One cause of CRL-related failures is the use of large CRL files. This class of error is typically only applicable to special environments with extended CA chains. Standard public endpoints shouldn't encounter this class of issue.
The default maximum CRL size used by the Speech SDK (10 MB) can be adjusted per config object. The property key for this adjustment is `CONFIG_MAX_CRL_SIZE_KB` and the value, specified as a string, is by default "10000" (10 MB). For example, when creating a `SpeechRecognizer` object (that manages a connection to the Speech service), you can set this property in its `SpeechConfig`. In the following code snippet, the configuration is adjusted to permit a CRL file size up to 15 MB.
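For example, with the Python Speech SDK (`azure-cognitiveservices-speech`), the adjustment can be sketched as follows; the subscription key and region values are placeholders, and the snippet assumes the SDK package is installed:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; substitute your own Speech resource values
speech_config = speechsdk.SpeechConfig(subscription="your-key", region="your-region")

# Allow CRL files up to 15 MB (value is specified in KB, as a string)
speech_config.set_property_by_name("CONFIG_MAX_CRL_SIZE_KB", "15000")

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
```

The property must be set before the recognizer is created, because the recognizer captures its configuration at construction time.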
## Bypassing or ignoring CRL failures

To turn off certificate revocation checks, set the property `"OPENSSL_DISABLE_CRL_CHECK"` to `"true"`. Then, while connecting to the Speech service, there's no attempt to check or download a CRL and no automatic verification of a reported TLS/SSL certificate.
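A sketch of the same pattern with the Python Speech SDK (placeholder credentials, SDK package assumed installed):

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="your-key", region="your-region")

# WARNING: disables certificate revocation checks entirely;
# strongly discouraged for production scenarios
speech_config.set_property_by_name("OPENSSL_DISABLE_CRL_CHECK", "true")
```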
By default, the Speech SDK caches a successfully downloaded CRL on disk to improve the initial connection performance of future connections.
Some Linux distributions don't have a `TMP` or `TMPDIR` environment variable defined, so the Speech SDK doesn't cache downloaded CRLs. Without a `TMP` or `TMPDIR` environment variable defined, the Speech SDK downloads a new CRL for each connection. To improve initial connection performance in this situation, you can [create a `TMPDIR` environment variable and set it to the accessible path of a temporary directory](https://help.ubuntu.com/community/EnvironmentVariables).
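As a sketch using only the Python standard library, an application process can make sure the variable is defined before the SDK initializes; the fallback path comes from `tempfile` and is an assumption, not documented SDK behavior:

```python
import os
import tempfile

# If neither TMP nor TMPDIR is defined, point TMPDIR at a writable temp directory
if not os.environ.get("TMP") and not os.environ.get("TMPDIR"):
    os.environ["TMPDIR"] = tempfile.gettempdir()

print("TMPDIR:", os.environ.get("TMPDIR", os.environ.get("TMP")))
```

Exporting the variable in a shell profile, as the linked Ubuntu article describes, achieves the same effect for every process.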
#Customer intent: As a developer, I want to learn how to monitor and control service connections with the Speech SDK so that I can manage connections to the Speech service.
---
# How to monitor and control service connections with the Speech SDK
`Connection` has explicit methods to start or end a connection to the Speech service. Reasons you might want to control the connection include:
- Preconnecting to the Speech service to allow the first interaction to start as quickly as possible.
- Establishing connection at a specific time in your application's logic to gracefully and predictably handle initial connection failures.
- Disconnecting to clear an idle connection when you don't expect immediate reconnection but also don't want to destroy the object.
Some important notes on the behavior when manually modifying connection state:
- Trying to connect when already connected doesn't generate an error. Monitor the `Connected` and `Disconnected` events if you want to know the current state of the connection.
- A failure to connect that originates from a problem that has no involvement with the Speech service--such as attempting to do so from an invalid state--throws or returns an error as appropriate to the programming language. Failures that require network resolution--such as authentication failures--don't throw or return an error but instead generate a `Canceled` event on the top-level object the `Connection` was created from.
- Manually disconnecting from the Speech service during an ongoing interaction results in a connection error and loss of data for that interaction. Connection errors are surfaced on the appropriate top-level object's `Canceled` event.
articles/ai-services/speech-service/how-to-custom-speech-continuous-integration-continuous-deployment.md
---
title: CI/CD for custom speech - Speech service
titleSuffix: Azure AI services
description: Apply DevOps with custom speech and CI/CD workflows. Implement an existing DevOps solution for your own project.
author: eric-urban
ms.author: eur
manager: nitinme
ms.service: azure-ai-speech
ms.topic: how-to
ms.date: 9/19/2024
#Customer intent: As a developer, I want to learn how to apply DevOps with custom speech and CI/CD workflows so that I can implement an existing DevOps solution for my own project.
---
# CI/CD for custom speech
Implement automated training, testing, and release management to enable continuous improvement of custom speech models.
[Continuous delivery](/devops/deliver/what-is-continuous-delivery) (CD) takes models from the CI process and creates an endpoint for each improved custom speech model. CD makes endpoints easily available to be integrated into solutions.
Custom CI/CD solutions are possible, but for a robust, prebuilt solution, use the [Speech DevOps template repository](https://github.com/Azure-Samples/Speech-Service-DevOps-Template), which executes CI/CD workflows using GitHub Actions.
## CI/CD workflows for custom speech
The purpose of these workflows is to ensure that each custom speech model has better recognition accuracy than the previous build. If the updates to the testing or training data improve the accuracy, these workflows create a new custom speech endpoint.
Git servers such as GitHub and Azure DevOps can run automated workflows when specific Git events happen, such as merges or pull requests. For example, a CI workflow can be triggered when updates to testing data are pushed to the *main* branch. Different Git servers have different tooling, but they allow scripting of command-line interface (CLI) commands so that they can execute on a build server.
The workflows should name and store data, tests, test files, models, and endpoints such that they can be traced back to the commit or version they came from. It's also helpful to name these assets so that it's easy to see which were created after updating testing data versus training data.
articles/ai-services/speech-service/how-to-custom-speech-deploy-model.md
---
author: eric-urban
manager: nitinme
ms.service: azure-ai-speech
ms.topic: how-to
ms.date: 9/19/2024
ms.author: eur
zone_pivot_groups: speech-studio-cli-rest
#Customer intent: As a developer, I want to learn how to deploy a custom speech model so that I can use it in my applications.
---
# Deploy a custom speech model
The locations of each log file with more details are returned in the response body.
Logging data is available on Microsoft-owned storage for 30 days, and then it's removed. If your own storage account is linked to the Azure AI services subscription, the logging data isn't automatically deleted.
## Related content
- [CI/CD for custom speech](how-to-custom-speech-continuous-integration-continuous-deployment.md)
- [Custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md)
articles/ai-services/speech-service/how-to-custom-speech-display-text-format.md
---
author: eric-urban
manager: nitinme
ms.service: azure-ai-speech
ms.topic: how-to
ms.date: 9/19/2024
ms.author: eur
#Customer intent: As a developer, I want to learn how to prepare display text format training data for custom speech so that I can customize the display text formatting pipeline for my specific scenarios.
---
# How to prepare display text format training data for custom speech
articles/ai-services/speech-service/how-to-custom-speech-human-labeled-transcriptions.md
---
author: eric-urban
manager: nitinme
ms.service: azure-ai-speech
ms.topic: how-to
ms.date: 9/19/2024
ms.author: eur
#Customer intent: As a developer, I need to understand how to create human-labeled transcriptions for my audio data so that I can improve speech recognition accuracy.
---
articles/ai-services/speech-service/how-to-custom-speech-inspect-data.md
---
author: eric-urban
manager: nitinme
ms.service: azure-ai-speech
ms.topic: how-to
ms.date: 9/19/2024
ms.author: eur
zone_pivot_groups: speech-studio-cli-rest
#Customer intent: As a developer, I want to test the recognition quality of a custom speech model so that I can determine if the provided recognition result is correct.
---
# Test recognition quality of a custom speech model
articles/ai-services/speech-service/how-to-custom-speech-model-and-endpoint-lifecycle.md
---
title: Model lifecycle of custom speech - Speech service
titleSuffix: Azure AI services
description: Custom speech provides base models for training and lets you create custom models from your data. This article describes the timelines for models and for endpoints that use these models.
author: eric-urban
manager: nitinme
ms.author: eur
ms.service: azure-ai-speech
ms.topic: how-to
ms.date: 9/19/2024
ms.reviewer: heikora
zone_pivot_groups: speech-studio-cli-rest
#Customer intent: As a developer, I want to understand the lifecycle of custom speech models and endpoints so that I can plan for the expiration of my models.
---