articles/ai-services/speech-service/how-to-audio-content-creation.md (1 addition, 1 deletion)

@@ -141,7 +141,7 @@ After you review your audio output and are satisfied with your tuning and adjust
If you lose access permission to your Bring Your Own Storage (BYOS), you can't view, create, edit, or delete files. To resume your access, you need to remove the current storage and reconfigure the BYOS in the [Azure portal](https://portal.azure.com/#allservices). To learn more about how to configure BYOS, see [Mount Azure Storage as a local share in App Service](/azure/app-service/configure-connect-to-azure-storage?pivots=container-linux&tabs=portal).
- After configuring the BYOS permission, you need to configure anonymous public read access for related containers and blobs. Otherwise, blob data isn't available for public access and your lexicon file in the blob is inaccessible. By default, a container’s public access setting is disabled. To grant anonymous users read access to a container and its blobs, first set **Allow Blob public access** to **Enabled** to allow public access for the storage account, then set the container's (named **acc-public-files**) public access level (**anonymous read access for blobs only**). To learn more about how to configure anonymous public read access, see [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure?tabs=portal).
+ After configuring the BYOS permission, you need to configure anonymous public read access for related containers and blobs. Otherwise, blob data isn't available for public access and your lexicon file in the blob is inaccessible. By default, a container’s public access setting is disabled. To grant anonymous users read access to a container and its blobs, first set **Allow Blob anonymous access** to **Enabled** to allow public access for the storage account, then set the container's (named **acc-public-files**) public access level (**anonymous read access for blobs only**). To learn more about how to configure anonymous public read access, see [Configure anonymous public read access for containers and blobs](/azure/storage/blobs/anonymous-read-access-configure?tabs=portal).
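If you script your storage setup instead of using the portal, the following sketch shows the equivalent Azure CLI configuration; the storage account and resource group names are placeholders, and the container name comes from the article.

```shell
# Allow anonymous (public) blob access at the storage account level.
az storage account update \
  --name <your-storage-account> \
  --resource-group <your-resource-group> \
  --allow-blob-public-access true

# Set the container's public access level to "blob" (anonymous read access for blobs only).
az storage container set-permission \
  --name acc-public-files \
  --account-name <your-storage-account> \
  --public-access blob
```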
articles/ai-services/speech-service/how-to-custom-speech-create-project.md

With custom speech, you can enhance speech recognition accuracy for your applications by using a custom model for real-time speech to text, speech translation, and batch transcription.
+ > [!TIP]
+ > Bring your custom speech models from [Speech Studio](https://speech.microsoft.com) to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). In the Azure AI Foundry portal, you can pick up where you left off by connecting to your existing Speech resource. For more information, see [Connect to an existing Speech resource](../../ai-studio/ai-services/how-to/connect-ai-services.md#connect-azure-ai-services-after-you-create-a-project).
You create a custom speech model by fine-tuning an Azure AI Speech base model with your own data. You can upload your data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.
This article shows you how to use fine-tuning to create a custom speech model. For more information about custom speech, see the [custom speech overview](./custom-speech-overview.md) documentation.
@@ -63,7 +66,7 @@ After fine-tuning, you can access your custom speech models and deployments from
::: zone pivot="speech-studio"
- To create a custom speech project in [Speech Studio](https://aka.ms/speechstudio/customspeech), follow these steps:
+ After you create a custom speech project, you can access your custom speech models and deployments from the **Custom speech** page.
1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech).
1. Select the subscription and Speech resource to work with.
@@ -78,6 +81,46 @@ Select the new project by name or select **Go to project**. Then you should see
::: zone-end
+ ## Get the project ID for the REST API
+
+ ::: zone pivot="ai-foundry-portal"
+
+ When you use the speech to text REST API for custom speech, you need to set the `project` property to the ID of your custom speech project. Set the `project` property so that you can manage fine-tuning in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
+
+ > [!IMPORTANT]
+ > The project ID for custom speech isn't the same as the ID of the Azure AI Foundry project.
+
+ You can find the project ID in the URL after you select or start fine-tuning a custom speech model.
+
+ 1. Sign in to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
+ 1. Select **Fine-tuning** from the left pane.
+ 1. Select **AI Service fine-tuning**.
+ 1. Select the custom model that you want to check from the **Model name** column.
+ 1. Inspect the URL in your browser. The project ID is part of the URL. For example, the project ID is `00001111-aaaa-2222-bbbb-3333cccc4444` in the following URL:
+ ::: zone-end
+
+ ::: zone pivot="speech-studio"
+
+ When you use the speech to text REST API for custom speech, you need to set the `project` property to the ID of your custom speech project. Set the `project` property so that you can manage fine-tuning in the [Speech Studio](https://aka.ms/speechstudio/customspeech).
+
+ To get the project ID for a custom speech project in [Speech Studio](https://aka.ms/speechstudio/customspeech):
+
+ 1. Sign in to the [Speech Studio](https://aka.ms/speechstudio/customspeech) and select the **Custom speech** tile.
+ 1. Select your custom speech project.
+ 1. Inspect the URL in your browser. The project ID is part of the URL. For example, the project ID is `00001111-aaaa-2222-bbbb-3333cccc4444` in the following URL:
articles/ai-services/speech-service/how-to-custom-speech-deploy-model.md (20 additions, 2 deletions)

@@ -26,6 +26,9 @@ You can deploy an endpoint for a base or custom model, and then [update](#change
## Add a deployment endpoint
+ > [!TIP]
+ > Bring your custom speech models from [Speech Studio](https://speech.microsoft.com) to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). In the Azure AI Foundry portal, you can pick up where you left off by connecting to your existing Speech resource. For more information, see [Connect to an existing Speech resource](../../ai-studio/ai-services/how-to/connect-ai-services.md#connect-azure-ai-services-after-you-create-a-project).
::: zone pivot="ai-foundry-portal"
1. Sign in to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
@@ -80,9 +83,11 @@ Select the endpoint link to view information specific to it, such as the endpoin
::: zone pivot="speech-cli"
+ Before proceeding, make sure that you have the [Speech CLI](./spx-basics.md) installed and configured.
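If you still need to set up the CLI, here's a minimal sketch of one way to install it as a .NET global tool and point it at your Speech resource; the key and region values are placeholders.

```shell
# Install the Speech CLI as a .NET global tool (requires the .NET SDK).
dotnet tool install --global Microsoft.CognitiveServices.Speech.CLI

# Store your Speech resource key and region so that spx commands can authenticate.
spx config @key --set YOUR-SPEECH-RESOURCE-KEY
spx config @region --set YOUR-SPEECH-RESOURCE-REGION
```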
To create an endpoint and deploy a model, use the `spx csr endpoint create` command. Construct the request parameters according to the following instructions:
- - Set the `project` property to the ID of an existing project. This property is recommended so that you can also view and manage the endpoint in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). You can run the `spx csr project list` command to get available projects.
+ - Set the `project` property to the ID of an existing project. The `project` property is recommended so that you can also manage fine-tuning for custom speech in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). To get the project ID, see [Get the project ID for the REST API](./how-to-custom-speech-create-project.md#get-the-project-id-for-the-rest-api).
- Set the required `model` property to the ID of the model that you want deployed to the endpoint.
- Set the required `language` property. The endpoint locale must match the locale of the model. The locale can't be changed later. The Speech CLI `language` property corresponds to the `locale` property in the JSON request and response.
- Set the required `name` property. This is the name that is displayed in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). The Speech CLI `name` property corresponds to the `displayName` property in the JSON request and response.
@@ -94,6 +99,9 @@ Here's an example Speech CLI command to create an endpoint and deploy a model:
spx csr endpoint create --api-version v3.2 --project YourProjectId --model YourModelId --name "My Endpoint" --description "My Endpoint Description" --language "en-US"
```
+ > [!IMPORTANT]
+ > You must set `--api-version v3.2`. The Speech CLI uses the REST API, but doesn't yet support versions later than `v3.2`.
You should receive a response body in the following format:
```json
@@ -140,7 +148,7 @@ spx help csr endpoint
To create an endpoint and deploy a model, use the [Endpoints_Create](/rest/api/speechtotext/endpoints/create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- - Set the `project` property to the URI of an existing project. This property is recommended so that you can also view and manage the endpoint in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). You can make a [Projects_List](/rest/api/speechtotext/projects/list) request to get available projects.
+ - Set the `project` property to the URI of an existing project. This property is recommended so that you can also view and manage the endpoint in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). To get the project ID, see [Get the project ID for the REST API](./how-to-custom-speech-create-project.md#get-the-project-id-for-the-rest-api).
- Set the required `model` property to the URI of the model that you want deployed to the endpoint.
- Set the required `locale` property. The endpoint locale must match the locale of the model. The locale can't be changed later.
- Set the required `displayName` property. This is the name that is displayed in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
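Putting these properties together, here's a sketch of what such a request might look like with curl; the region, key, and resource IDs are placeholders, and the `/speechtotext/v3.2/...` paths are assumed from the v3.2 API.

```shell
# Sketch: create an endpoint and deploy a model with the Speech to text REST API v3.2 (placeholder values).
curl -X POST "https://YourRegion.api.cognitive.microsoft.com/speechtotext/v3.2/endpoints" \
  -H "Ocp-Apim-Subscription-Key: YOUR-SPEECH-RESOURCE-KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "displayName": "My Endpoint",
        "description": "My Endpoint Description",
        "locale": "en-US",
        "model": { "self": "https://YourRegion.api.cognitive.microsoft.com/speechtotext/v3.2/models/YourModelId" },
        "project": { "self": "https://YourRegion.api.cognitive.microsoft.com/speechtotext/v3.2/projects/YourProjectId" }
      }'
```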
@@ -225,6 +233,8 @@ To use a new model and redeploy the custom endpoint:
::: zone pivot="speech-cli"
+ Before proceeding, make sure that you have the [Speech CLI](./spx-basics.md) installed and configured.
To redeploy the custom endpoint with a new model, use the `spx csr model update` command. Construct the request parameters according to the following instructions:
- Set the required `endpoint` property to the ID of the endpoint that you want deployed.
@@ -236,6 +246,9 @@ Here's an example Speech CLI command that redeploys the custom endpoint with a n
articles/ai-services/speech-service/how-to-custom-speech-evaluate-data.md (15 additions, 2 deletions)

@@ -22,6 +22,9 @@ In this article, you learn how to quantitatively measure and improve the accurac
## Create a test
+ > [!TIP]
+ > Bring your custom speech models from [Speech Studio](https://speech.microsoft.com) to the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). In the Azure AI Foundry portal, you can pick up where you left off by connecting to your existing Speech resource. For more information, see [Connect to an existing Speech resource](../../ai-studio/ai-services/how-to/connect-ai-services.md#connect-azure-ai-services-after-you-create-a-project).
You can test the accuracy of your custom model by creating a test. A test requires a collection of audio files and their corresponding transcriptions. You can compare a custom model's accuracy with a speech to text base model or another custom model. After you [get](#get-test-results) the test results, [evaluate the word error rate (WER)](#evaluate-word-error-rate-wer) compared to speech recognition results.
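For reference, the word error rate that such a test reports follows the standard definition: insertion, deletion, and substitution errors divided by the number of words in the reference transcript.

$$
\text{WER} = \frac{I + D + S}{N}
$$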
After you [upload training and testing datasets](how-to-custom-speech-upload-data.md), you can create a test.
@@ -74,9 +77,11 @@ Follow these steps to create an accuracy test:
::: zone pivot="speech-cli"
+ Before proceeding, make sure that you have the [Speech CLI](./spx-basics.md) installed and configured.
To create a test, use the `spx csr evaluation create` command. Construct the request parameters according to the following instructions:
- - Set the `project` property to the ID of an existing project. This property is recommended so that you can also view the test in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). You can run the `spx csr project list` command to get available projects.
+ - Set the `project` property to the ID of an existing project. The `project` property is recommended so that you can also manage fine-tuning for custom speech in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). To get the project ID, see [Get the project ID for the REST API](./how-to-custom-speech-create-project.md#get-the-project-id-for-the-rest-api).
- Set the required `model1` property to the ID of a model that you want to test.
- Set the required `model2` property to the ID of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
- Set the required `dataset` property to the ID of a dataset that you want to use for the test.
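Putting the properties above together, here's a sketch of what such a command might look like, assuming CLI flags that mirror the property names; the IDs and locale are placeholders.

```shell
# Sketch: create a test that compares two models against a dataset (placeholder values).
spx csr evaluation create --api-version v3.2 --project YourProjectId \
  --name "My Evaluation" \
  --model1 YourModel1Id --model2 YourModel2Id \
  --dataset YourDatasetId --language "en-US"
```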
@@ -89,6 +94,9 @@ Here's an example Speech CLI command that creates a test:
+ > [!IMPORTANT]
+ > You must set `--api-version v3.2`. The Speech CLI uses the REST API, but doesn't yet support versions later than `v3.2`.
You should receive a response body in the following format:
```json
@@ -159,7 +167,7 @@ spx help csr evaluation
To create a test, use the [Evaluations_Create](/rest/api/speechtotext/evaluations/create) operation of the [Speech to text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
- - Set the `project` property to the URI of an existing project. This property is recommended so that you can also view the test in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). You can make a [Projects_List](/rest/api/speechtotext/projects/list) request to get available projects.
+ - Set the `project` property to the URI of an existing project. The `project` property is recommended so that you can also manage fine-tuning for custom speech in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs). To get the project ID, see [Get the project ID for the REST API](./how-to-custom-speech-create-project.md#get-the-project-id-for-the-rest-api).
- Set the `testingKind` property to `Evaluation` within `customProperties`. If you don't specify `Evaluation`, the test is treated as a quality inspection test. Whether the `testingKind` property is set to `Evaluation` or `Inspection`, or not set, you can access the accuracy scores via the API, but not in the [Azure AI Foundry portal](https://ai.azure.com/?cid=learnDocs).
- Set the required `model1` property to the URI of a model that you want to test.
- Set the required `model2` property to the URI of another model that you want to test. If you don't want to compare two models, use the same model for both `model1` and `model2`.
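Combined with the other required properties (`dataset`, `displayName`, and `locale`), here's a sketch of what such a request might look like with curl; the region, key, and resource IDs are placeholders, and the `/speechtotext/v3.2/...` paths are assumed from the v3.2 API.

```shell
# Sketch: create a test with the Speech to text REST API v3.2 (placeholder values).
curl -X POST "https://YourRegion.api.cognitive.microsoft.com/speechtotext/v3.2/evaluations" \
  -H "Ocp-Apim-Subscription-Key: YOUR-SPEECH-RESOURCE-KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "displayName": "My Evaluation",
        "locale": "en-US",
        "customProperties": { "testingKind": "Evaluation" },
        "model1": { "self": "https://YourRegion.api.cognitive.microsoft.com/speechtotext/v3.2/models/YourModel1Id" },
        "model2": { "self": "https://YourRegion.api.cognitive.microsoft.com/speechtotext/v3.2/models/YourModel2Id" },
        "dataset": { "self": "https://YourRegion.api.cognitive.microsoft.com/speechtotext/v3.2/datasets/YourDatasetId" },
        "project": { "self": "https://YourRegion.api.cognitive.microsoft.com/speechtotext/v3.2/projects/YourProjectId" }
      }'
```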
@@ -277,6 +285,8 @@ This page lists all the utterances in your dataset and the recognition results,
::: zone pivot="speech-cli"
+ Before proceeding, make sure that you have the [Speech CLI](./spx-basics.md) installed and configured.
To get test results, use the `spx csr evaluation status` command. Construct the request parameters according to the following instructions:
- Set the required `evaluation` property to the ID of the evaluation for which you want to get test results.
@@ -287,6 +297,9 @@ Here's an example Speech CLI command that gets test results:
spx csr evaluation status --api-version v3.2 --evaluation aaaabbbb-6666-cccc-7777-dddd8888eeee
```
+ > [!IMPORTANT]
+ > You must set `--api-version v3.2`. The Speech CLI uses the REST API, but doesn't yet support versions later than `v3.2`.
The word error rates and more details are returned in the response body.
You should receive a response body in the following format: