articles/ai-services/speech-service/batch-transcription-create.md (4 additions, 4 deletions)
@@ -177,7 +177,7 @@ Here are some property options that you can use to configure a transcription whe
|`languageIdentification`|Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md?tabs=language-identification).<br/><br/>If you set the `languageIdentification` property, then you must also set its enclosed `candidateLocales` property.|
|`languageIdentification.candidateLocales`|The candidate locales for language identification such as `"properties": { "languageIdentification": { "candidateLocales": ["en-US", "de-DE", "es-ES"]}}`. A minimum of 2 and a maximum of 10 candidate locales, including the main locale for the transcription, is supported.|
|`locale`|The locale of the batch transcription. This should match the expected locale of the audio data to transcribe. The locale can't be changed later.<br/><br/>This property is required.|
|`model`|You can set the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model. If you don't specify the `model`, the default base model for the locale is used. For more information, see [Using custom models](#using-custom-models) and [Using Whisper models](#using-whisper-models).|
|`profanityFilterMode`|Specifies how to handle profanity in recognition results. Accepted values are `None` to disable profanity filtering, `Masked` to replace profanity with asterisks, `Removed` to remove all profanity from the result, or `Tags` to add profanity tags. The default value is `Masked`. |
|`punctuationMode`|Specifies how to handle punctuation in recognition results. Accepted values are `None` to disable punctuation, `Dictated` to imply explicit (spoken) punctuation, `Automatic` to let the decoder deal with punctuation, or `DictatedAndAutomatic` to use dictated and automatic punctuation. The default value is `DictatedAndAutomatic`.<br/><br/>This property isn't applicable for Whisper models.|
|`timeToLive`|How long after the transcription job is created that the transcription results are retained before they're automatically deleted. The value is an ISO 8601 encoded duration. For example, specify `PT12H` for 12 hours. As an alternative, you can call [Transcriptions_Delete](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Delete) regularly after you retrieve the transcription results.|
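As an illustration of how these properties fit together, here's a minimal sketch of a request body for creating a batch transcription. The audio URL and display name are placeholders (not from this article); the property names and example values follow the table above.

```python
import json

# All URLs and names below are placeholders; only the property names
# and example values come from the property table above.
body = {
    "contentUrls": ["https://example.com/audio/sample.wav"],  # placeholder
    "displayName": "My batch transcription",
    "locale": "en-US",  # required; must match the expected locale of the audio
    "properties": {
        "languageIdentification": {
            # 2 to 10 candidate locales, including the main locale
            "candidateLocales": ["en-US", "de-DE", "es-ES"],
        },
        "profanityFilterMode": "Masked",            # the default
        "punctuationMode": "DictatedAndAutomatic",  # the default
        "timeToLive": "PT12H",  # auto-delete results after 12 hours
    },
}

payload = json.dumps(body)  # send as the JSON body of the create request
```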
@@ -200,7 +200,7 @@ spx help batch transcription create advanced
Batch transcription uses the default base model for the locale that you specify. You don't need to set any properties to use the default base model.
Optionally, you can modify the previous [create transcription example](#create-a-batch-transcription) by setting the `model` property to use a specific base model or [custom speech](how-to-custom-speech-train-model.md) model.
To use a custom speech model for batch transcription, you need the model's URI. You can retrieve the model location when you create or get a model. The top-level `self` property in the response body is the model's URI. For an example, see the JSON response example in the [Create a model](how-to-custom-speech-train-model.md?pivots=rest-api#create-a-model) guide.
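As a sketch, pinning the transcription to a specific model looks like the following. The model URI here is a dummy placeholder; in practice, use the top-level `self` value from your own create or get model response.

```python
import json

# Dummy placeholder for the model's URI; in practice this is the
# top-level `self` property returned when you create or get the model.
model_uri = (
    "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1/"
    "models/00000000-0000-0000-0000-000000000000"
)

body = {
    "contentUrls": ["https://example.com/audio/sample.wav"],  # placeholder
    "displayName": "Transcription with a custom model",
    "locale": "en-US",
    "model": {"self": model_uri},  # omit `model` to use the default base model
}

payload = json.dumps(body)
```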
> [!TIP]
> A [hosted deployment endpoint](how-to-custom-speech-deploy-model.md) isn't required to use custom speech with the batch transcription service. You can conserve resources if the [custom speech model](how-to-custom-speech-train-model.md) is only used for batch transcription.
Batch transcription requests for expired models fail with a 4xx error. Set the `model` property to a base model or custom model that hasn't yet expired. Otherwise, don't include the `model` property, so that the latest base model is always used. For more information, see [Choose a model](how-to-custom-speech-create-project.md#choose-your-model) and [custom speech model lifecycle](how-to-custom-speech-model-and-endpoint-lifecycle.md).
articles/ai-services/speech-service/bring-your-own-storage-speech-resource-speech-to-text.md (8 additions, 8 deletions)
@@ -24,7 +24,7 @@ This article explains in depth how to use a BYOS-enabled Speech resource in all
## Data storage
When using BYOS, the Speech service doesn't keep any customer artifacts after the data processing (transcription, model training, model testing) is complete. However, some metadata that isn't derived from the user content is stored within Speech service premises. For example, in the custom speech scenario, the service keeps certain information about the custom endpoints, like which models they use.
The BYOS-associated Storage account stores the following data:
@@ -123,23 +123,23 @@ URL of this format ensures that only Microsoft Entra identities (users, service
## Custom speech
With custom speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for real-time speech to text, speech translation, and batch transcription. For more information, see the [custom speech overview](custom-speech-overview.md).
There's nothing specific about how you use custom speech with a BYOS-enabled Speech resource. The only difference is where all custom model related data, which the Speech service collects and produces for you, is stored. The data is stored in the following Blob containers of the BYOS-associated Storage account:
- `customspeech-models` - Location of custom speech models
- `customspeech-artifacts` - Location of all other custom speech related data
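For example, a script that inspects (but never modifies; see the caution that follows) this data only needs the standard Blob endpoint and these two fixed container names. The storage account name below is a placeholder; the container names are the ones listed above.

```python
# Minimal sketch: build the Blob container URLs for the custom speech
# data in a BYOS-associated Storage account. The account name is a
# placeholder; the container names are fixed by the Speech service.
storage_account = "mybyosstorage"  # placeholder account name

containers = {
    "models": "customspeech-models",        # custom speech models
    "artifacts": "customspeech-artifacts",  # all other custom speech data
}

container_urls = {
    kind: f"https://{storage_account}.blob.core.windows.net/{name}"
    for kind, name in containers.items()
}
```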
The Blob container structure is provided for your information only and is subject to change without notice.
> [!CAUTION]
> Speech service relies on pre-defined Blob container paths and file names for the custom speech module to function correctly. Don't move, rename, or otherwise alter the contents of the `customspeech-models` container or the custom speech related folders of the `customspeech-artifacts` container.
>
> Failure to do so will very likely result in hard-to-debug errors and might require retraining of the custom model.
>
> Use standard tools, like the REST API and Speech Studio, to interact with the custom speech related data. For details, see the [custom speech overview](custom-speech-overview.md).
### Use of REST API with custom speech
[Speech to text REST API](rest-speech-to-text.md) fully supports BYOS-enabled Speech resources. However, because the data is now stored within the BYOS-enabled Storage account, requests like [Get Dataset Files](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Datasets_ListFiles) interact with the BYOS-associated Storage account Blob storage instead of Speech service internal resources. This allows you to use the same REST API based code for both "regular" and BYOS-enabled Speech resources.
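As a sketch, a Get Dataset Files request is constructed the same way in both cases; only the file URLs in the response differ. The region, dataset ID, and key below are placeholders.

```python
# Build the Get Dataset Files request URL for the Speech to text REST
# API v3.1. The region, dataset ID, and key are placeholders.
region = "eastus"
dataset_id = "00000000-0000-0000-0000-000000000000"

url = (
    f"https://{region}.api.cognitive.microsoft.com/"
    f"speechtotext/v3.1/datasets/{dataset_id}/files"
)
headers = {"Ocp-Apim-Subscription-Key": "<your-speech-resource-key>"}

# e.g. with the `requests` package:
#   response = requests.get(url, headers=headers)
# With BYOS, the file URLs in the response point into your own Storage
# account instead of Speech service internal storage.
```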
articles/ai-services/speech-service/call-center-overview.md (1 addition, 1 deletion)
@@ -45,7 +45,7 @@ The Speech service works well with prebuilt models. However, you might want to f
| Speech customization | Description |
| -------------- | ----------- |
|[Custom speech](./custom-speech-overview.md)| A speech to text feature used to evaluate and improve the speech recognition accuracy of use-case specific entities (such as alpha-numeric customer, case, and contract IDs, license plates, and names). You can also train a custom model with your own product names and industry terminology. |
|[Custom neural voice](./custom-neural-voice.md)| A text to speech feature that lets you create a one-of-a-kind, customized, synthetic voice for your applications. |
articles/ai-services/speech-service/custom-speech-overview.md (9 additions, 9 deletions)
@@ -1,7 +1,7 @@
---
title: Custom speech overview - Speech service
titleSuffix: Azure AI services
description: Custom speech is a set of online tools that allows you to evaluate and improve the speech to text accuracy for your applications, tools, and products.
author: eric-urban
manager: nitinme
ms.service: azure-ai-speech
@@ -11,9 +11,9 @@ ms.author: eur
ms.custom: contperf-fy21q2, references_regions
---
# What is custom speech?
With custom speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing various common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works well in most speech recognition scenarios.
@@ -23,9 +23,9 @@ You can also train a model with structured text when the data follows a pattern,
## How does it work?
With custom speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.

Here's more information about the sequence of steps shown in the previous diagram:
@@ -35,10 +35,10 @@ Here's more information about the sequence of steps shown in the previous diagra
1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER), which you can use to determine if more training is required.
1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
> [!NOTE]
> You pay for custom speech model usage and [endpoint hosting](how-to-custom-speech-deploy-model.md). You're also charged for custom speech model training if the base model was created on or after October 1, 2023. You're not charged for training if the base model was created before October 2023. For more information, see [Azure AI Speech pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and the [Charge for adaptation section in the speech to text 3.2 migration guide](./migrate-v3-1-to-v3-2.md#charge-for-adaptation).
1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint. Except for [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a custom speech model.
> [!TIP]
> A hosted deployment endpoint isn't required to use custom speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the custom speech model is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
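To make the quantitative testing step above concrete, here's a simplified sketch of how a word error rate can be computed: the word-level edit distance between the reference transcript and the recognition hypothesis, divided by the number of reference words. This is an illustration only, not the service's implementation.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the
    # first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# e.g. one substituted word across four reference words gives WER 0.25
```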
articles/ai-services/speech-service/faq-stt.yml (1 addition, 1 deletion)
@@ -105,7 +105,7 @@ sections:
If you submit each channel separately in its own file, you're charged for the audio duration of each file. If you submit a single file with the channels multiplexed together, you're charged for the duration of the single file. For more information about pricing, see the [Azure AI services pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
> [!IMPORTANT]
> If you have further privacy concerns that prevent you from using the custom speech service, contact one of the support channels.