articles/ai-services/speech-service/includes/how-to/professional-voice/create-consent/rest.md (+4 −4)
@@ -11,7 +11,7 @@
 With the professional voice feature, it's required that every voice be created with explicit consent from the user. A recorded statement from the user is required acknowledging that the customer (Azure AI Speech resource owner) will create and use their voice.
 
-To add voice talent consent to the professional voice project, you get the prerecorded consent audio file from a publicly accessible URL (`Consents_Create`) or upload the audio file (`Consents_Post`). In this article, you add consent from a URL.
+To add voice talent consent to the professional voice project, you get the prerecorded consent audio file from a publicly accessible URL ([Consents_Create](/rest/api/speechapi/consents/create)) or upload the audio file ([Consents_Post](/rest/api/speechapi/consents/post)). In this article, you add consent from a URL.
 
 ## Consent statement
@@ -25,15 +25,15 @@ You can get the consent statement text for each locale from the text to speech G
 
 ## Add consent from a URL
 
-To add consent to a professional voice project from the URL of an audio file, use the `Consents_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To add consent to a professional voice project from the URL of an audio file, use the [Consents_Create](/rest/api/speechapi/consents/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
 - Set the required `voiceTalentName` property. The voice talent name can't be changed later.
 - Set the required `companyName` property. The company name can't be changed later.
 - Set the required `audioUrl` property. This is the URL of the voice talent consent audio file. Use a URI with a [shared access signature (SAS)](/azure/storage/common/storage-sas-overview) token.
 - Set the required `locale` property. This should be the locale of the consent. The locale can't be changed later. You can find the text to speech locale list [here](/azure/ai-services/speech-service/language-support?tabs=tts).
 
-Make an HTTP PUT request using the URI as shown in the following `Consents_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Consents_Create](/rest/api/speechapi/consents/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `JessicaConsentId` with a consent ID of your choice. The case-sensitive ID will be used in the consent's URI and can't be changed later.
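The PUT request that the changed lines describe can be sketched in Python. This is an illustrative sketch, not the authoritative call: the base path and `api-version` value below are assumptions (take the exact URI from the linked Consents_Create reference), and the placeholder values mirror the article's. The network call itself is left commented out.

```python
# Sketch of the Consents_Create PUT request described above.
# Assumptions: the base path and api-version below are illustrative only;
# the authoritative URI is in the Consents_Create REST reference.
import json

region = "YourResourceRegion"    # your Speech resource region
consent_id = "JessicaConsentId"  # case-sensitive; used in the consent's URI

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"consents/{consent_id}?api-version=2024-02-01-preview"  # assumed version
)

body = {
    "projectId": "ProjectId",            # required
    "voiceTalentName": "Jessica Smith",  # required; can't be changed later
    "companyName": "Contoso",            # required; can't be changed later
    # URL of the consent audio, including a SAS token (hypothetical example):
    "audioUrl": "https://contoso.blob.core.windows.net/public/consent.wav?sv=...",
    "locale": "en-US",                   # locale of the consent recording
}

headers = {
    "Ocp-Apim-Subscription-Key": "YourResourceKey",
    "Content-Type": "application/json",
}

# With the third-party requests package, the call would look like:
# requests.put(url, headers=headers, data=json.dumps(body))
print(json.dumps(body, indent=2))
```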
@@ -65,7 +65,7 @@ You should receive a response body in the following format:
 }
 ```
 
-The response header contains the `Operation-Location` property. Use this URI to get details about the `Consents_Create` operation. Here's an example of the response header:
+The response header contains the `Operation-Location` property. Use this URI to get details about the [Consents_Create](/rest/api/speechapi/consents/create) operation. Here's an example of the response header:
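The `Operation-Location` URI in the changed line is typically consumed by polling it until the operation reaches a terminal state. A minimal sketch, assuming `Succeeded`/`Failed` terminal statuses (check the linked operation reference for the exact contract); the HTTP GET is stubbed out:

```python
# Sketch: poll an Operation-Location URI until the operation finishes.
# The terminal statuses here ("Succeeded", "Failed") are assumptions.
import time

def wait_for_operation(get_status, interval_seconds=5.0, max_polls=60):
    """get_status() stands in for a GET on the Operation-Location URI."""
    for _ in range(max_polls):
        status = get_status()
        if status in ("Succeeded", "Failed"):
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("operation did not complete in time")

# Stubbed status sequence in place of real HTTP responses:
statuses = iter(["NotStarted", "Running", "Succeeded"])
result = wait_for_operation(lambda: next(statuses), interval_seconds=0)
print(result)  # Succeeded
```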
articles/ai-services/speech-service/includes/how-to/professional-voice/create-project/rest.md (+2 −2)
@@ -15,12 +15,12 @@ Each project is specific to a country/region and language, and the gender of the
 
 ## Create a project
 
-To create a professional voice project, use the `Projects_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To create a professional voice project, use the [Projects_Create](/rest/api/speechapi/projects/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `kind` property to `ProfessionalVoice`. The kind can't be changed later.
 - Optionally, set the `description` property for the project description. The project description can be changed later.
 
-Make an HTTP PUT request using the URI as shown in the following `Projects_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Projects_Create](/rest/api/speechapi/projects/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `ProjectId` with a project ID of your choice. The case-sensitive ID must be unique within your Speech resource. The ID will be used in the project's URI and can't be changed later.
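The project-creation request described in this hunk can be sketched as follows. The base path and `api-version` are assumptions, not the documented URI (that lives in the linked Projects_Create reference); the body uses only properties named in the changed lines.

```python
# Sketch of the Projects_Create PUT request described above. The base path
# and api-version are assumptions; see the Projects_Create reference.
import json

region = "YourResourceRegion"
project_id = "ProjectId"  # case-sensitive; must be unique within the resource

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"projects/{project_id}?api-version=2024-02-01-preview"  # assumed version
)

body = {
    "kind": "ProfessionalVoice",                 # required; can't be changed later
    "description": "Project for Jessica voice",  # optional; can be changed later
}

headers = {
    "Ocp-Apim-Subscription-Key": "YourResourceKey",
    "Content-Type": "application/json",
}

# requests.put(url, headers=headers, data=json.dumps(body))
print(json.dumps(body))
```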
articles/ai-services/speech-service/includes/how-to/professional-voice/create-training-set/rest.md (+5 −5)
@@ -15,14 +15,14 @@ In this article, you [create a training set](#create-a-training-set) and get its
 
 ## Create a training set
 
-To create a training set, use the `TrainingSets_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To create a training set, use the [TrainingSets_Create](/rest/api/speechapi/training-sets/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
 - Set the required `voiceKind` property to `Male` or `Female`. The kind can't be changed later.
 - Set the required `locale` property. This should be the locale of the training set data. The locale of the training set should be the same as the locale of the [consent statement](../../../../professional-voice-create-consent.md). The locale can't be changed later. You can find the text to speech locale list [here](/azure/ai-services/speech-service/language-support?tabs=tts).
 - Optionally, set the `description` property for the training set description. The training set description can be changed later.
 
-Make an HTTP PUT request using the URI as shown in the following `TrainingSets_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [TrainingSets_Create](/rest/api/speechapi/training-sets/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `JessicaTrainingSetId` with a training set ID of your choice. The case-sensitive ID will be used in the training set's URI and can't be changed later.
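The training-set request described in this hunk can be sketched as follows. The base path and `api-version` are assumptions (the exact URI is in the linked TrainingSets_Create reference); the body uses only properties named in the changed lines.

```python
# Sketch of the TrainingSets_Create PUT request described above. Base path
# and api-version are assumptions; see the TrainingSets_Create reference.
import json

region = "YourResourceRegion"
training_set_id = "JessicaTrainingSetId"  # case-sensitive

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"trainingsets/{training_set_id}?api-version=2024-02-01-preview"  # assumed
)

body = {
    "projectId": "ProjectId",        # required
    "voiceKind": "Female",           # required: "Male" or "Female"
    "locale": "en-US",               # must match the consent statement's locale
    "description": "300 sentences",  # optional; can be changed later
}

# requests.put(url, headers={"Ocp-Apim-Subscription-Key": "YourResourceKey",
#                            "Content-Type": "application/json"},
#              data=json.dumps(body))
print(json.dumps(body))
```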
@@ -53,7 +53,7 @@ You should receive a response body in the following format:
 
 ## Upload training set data
 
-To upload a training set of audio and scripts, use the `TrainingSets_UploadData` operation of the custom voice API.
+To upload a training set of audio and scripts, use the [TrainingSets_UploadData](/rest/api/speechapi/training-sets/upload-data) operation of the custom voice API.
 
 Before calling this API, store the recording and script files in Azure Blob Storage. In the example below, the recording files are https://contoso.blob.core.windows.net/voicecontainer/jessica300/*.wav, and the script files are
-The response header contains the `Operation-Location` property. Use this URI to get details about the `TrainingSets_UploadData` operation. Here's an example of the response header:
+The response header contains the `Operation-Location` property. Use this URI to get details about the [TrainingSets_UploadData](/rest/api/speechapi/training-sets/upload-data) operation. Here's an example of the response header:
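As a rough illustration of the upload call, here is a heavily hedged sketch. The request body schema for TrainingSets_UploadData is not shown in this diff, so every field name below (`audios`, `scripts`, and their `containerUrl`/`prefix`/`extensions` shape) is a hypothetical stand-in; only the blob container and `jessica300/*.wav` layout follow the article's example.

```python
# Hypothetical sketch of TrainingSets_UploadData; the field names below are
# illustrative guesses, not the documented schema. Consult the reference.
import json

region = "YourResourceRegion"
training_set_id = "JessicaTrainingSetId"

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"trainingsets/{training_set_id}:upload?api-version=2024-02-01-preview"  # assumed
)

body = {
    # Recording files, per the article's example (SAS token elided):
    "audios": {
        "containerUrl": "https://contoso.blob.core.windows.net/voicecontainer?sv=...",
        "prefix": "jessica300/",
        "extensions": [".wav"],
    },
    # Matching script files (hypothetical location):
    "scripts": {
        "containerUrl": "https://contoso.blob.core.windows.net/voicecontainer?sv=...",
        "prefix": "jessica300/",
        "extensions": [".txt"],
    },
}

# requests.post(url, headers={"Ocp-Apim-Subscription-Key": "YourResourceKey",
#                             "Content-Type": "application/json"},
#               data=json.dumps(body))
print(sorted(body))  # ['audios', 'scripts']
```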
articles/ai-services/speech-service/includes/how-to/professional-voice/deploy-endpoint/rest.md (+9 −9)
@@ -16,13 +16,13 @@ After you've successfully created and [trained](../../../../professional-voice-t
 
 ## Add a deployment endpoint
 
-To create an endpoint, use the `Endpoints_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To create an endpoint, use the [Endpoints_Create](/rest/api/speechapi/endpoints/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
 - Set the required `modelId` property. See [train a voice model](../../../../professional-voice-train-voice.md).
 - Set the required `description` property. The description can be changed later.
 
-Make an HTTP PUT request using the URI as shown in the following `Endpoints_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Endpoints_Create](/rest/api/speechapi/endpoints/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `EndpointId` with an endpoint ID of your choice. The ID must be a GUID and must be unique within your Speech resource. The ID will be used in the endpoint's URI and can't be changed later.
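The endpoint-creation request can be sketched as follows, including generating the required GUID endpoint ID. The base path and `api-version` are assumptions; the linked Endpoints_Create reference has the authoritative URI.

```python
# Sketch of the Endpoints_Create PUT request described above. Base path and
# api-version are assumptions; see the Endpoints_Create reference.
import json
import uuid

region = "YourResourceRegion"
endpoint_id = str(uuid.uuid4())  # must be a GUID, unique within the resource

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"endpoints/{endpoint_id}?api-version=2024-02-01-preview"  # assumed version
)

body = {
    "projectId": "ProjectId",                  # required
    "modelId": "JessicaModelId",               # required; the trained voice model
    "description": "Jessica voice endpoint",   # required; can be changed later
}

# requests.put(url, headers={"Ocp-Apim-Subscription-Key": "YourResourceKey",
#                            "Content-Type": "application/json"},
#              data=json.dumps(body))
print(len(endpoint_id))  # 36 (canonical GUID string length)
```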
@@ -52,7 +52,7 @@ You should receive a response body in the following format:
 }
 ```
 
-The response header contains the `Operation-Location` property. Use this URI to get details about the `Endpoints_Create` operation. Here's an example of the response header:
+The response header contains the `Operation-Location` property. Use this URI to get details about the [Endpoints_Create](/rest/api/speechapi/endpoints/create) operation. Here's an example of the response header:
@@ -83,9 +83,9 @@ To use a custom voice via [Speech Synthesis Markup Language (SSML)](../../../../
 
 You can suspend or resume an endpoint to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can continue to use the same endpoint URL in your application to synthesize speech.
 
-To suspend an endpoint, use the `Endpoints_Suspend` operation of the custom voice API.
+To suspend an endpoint, use the [Endpoints_Suspend](/rest/api/speechapi/endpoints/suspend) operation of the custom voice API.
 
-Make an HTTP POST request using the URI as shown in the following `Endpoints_Suspend` example.
+Make an HTTP POST request using the URI as shown in the following [Endpoints_Suspend](/rest/api/speechapi/endpoints/suspend) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `YourEndpointId` with the endpoint ID that you received when you created the endpoint.
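The suspend call is a POST to an endpoint-specific URI with no request body. A hedged sketch: the `:suspend` path segment and `api-version` below are assumptions, not the documented URI (the linked Endpoints_Suspend reference has the real one).

```python
# Hypothetical sketch of the Endpoints_Suspend POST. The ":suspend" path
# segment and api-version are assumptions; check the reference for the URI.
region = "YourResourceRegion"
endpoint_id = "YourEndpointId"

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"endpoints/{endpoint_id}:suspend?api-version=2024-02-01-preview"
)
headers = {"Ocp-Apim-Subscription-Key": "YourResourceKey"}

# requests.post(url, headers=headers)  # no request body is needed
print(url)
```

Resuming follows the same shape with the resume operation's URI in place of the suspend one.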
@@ -113,9 +113,9 @@ You should receive a response body in the following format:
 
 ## Resume an endpoint
 
-To suspend an endpoint, use the `Endpoints_Resume` operation of the custom voice API.
+To resume an endpoint, use the [Endpoints_Resume](/rest/api/speechapi/endpoints/resume) operation of the custom voice API.
 
-Make an HTTP POST request using the URI as shown in the following `Endpoints_Resume` example.
+Make an HTTP POST request using the URI as shown in the following [Endpoints_Resume](/rest/api/speechapi/endpoints/resume) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `YourEndpointId` with the endpoint ID that you received when you created the endpoint.
@@ -143,9 +143,9 @@ You should receive a response body in the following format:
 
 ## Delete an endpoint
 
-To delete an endpoint, use the `Endpoints_Delete` operation of the custom voice API.
+To delete an endpoint, use the [Endpoints_Delete](/rest/api/speechapi/endpoints/delete) operation of the custom voice API.
 
-Make an HTTP DELETE request using the URI as shown in the following `Endpoints_Delete` example.
+Make an HTTP DELETE request using the URI as shown in the following [Endpoints_Delete](/rest/api/speechapi/endpoints/delete) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `YourEndpointId` with the endpoint ID that you received when you created the endpoint.
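The delete call can be sketched the same way as the others; it takes no request body. The base path and `api-version` are assumptions (see the linked Endpoints_Delete reference for the exact URI).

```python
# Sketch of the Endpoints_Delete call. Base path and api-version are
# assumptions; the authoritative URI is in the Endpoints_Delete reference.
region = "YourResourceRegion"
endpoint_id = "YourEndpointId"  # the ID returned when the endpoint was created

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"endpoints/{endpoint_id}?api-version=2024-02-01-preview"
)
headers = {"Ocp-Apim-Subscription-Key": "YourResourceKey"}

# requests.delete(url, headers=headers)  # deletion can't be undone
print(url)
```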
articles/ai-services/speech-service/includes/how-to/professional-voice/train-voice/rest.md (+8 −8)
@@ -40,7 +40,7 @@ The language of the training data must be one of the [languages that are support
 
 # [Neural](#tab/neural)
 
-To create a neural voice, use the `Models_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To create a neural voice, use the [Models_Create](/rest/api/speechapi/models/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
 - Set the required `consentId` property. See [add voice talent consent](../../../../professional-voice-create-consent.md).
@@ -49,7 +49,7 @@ To create a neural voice, use the `Models_Create` operation of the custom voice
 - Set the required `voiceName` property. The voice name must end with "Neural" and can't be changed later. Choose a name carefully. The voice name is used in your [speech synthesis request](../../../../professional-voice-deploy-endpoint.md#use-your-custom-voice) by the SDK and SSML input. Only letters, numbers, and a few punctuation characters are allowed. Use different names for different neural voice models.
 - Optionally, set the `description` property for the voice description. The voice description can be changed later.
 
-Make an HTTP PUT request using the URI as shown in the following `Models_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Models_Create](/rest/api/speechapi/models/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `JessicaModelId` with a model ID of your choice. The case-sensitive ID will be used in the model's URI and can't be changed later.
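The neural-voice training request can be sketched as follows. The base path and `api-version` are assumptions, and `trainingSetId` is a hypothetical field name standing in for bullets elided from this diff; only `projectId`, `consentId`, `voiceName`, and `description` appear in the visible text. See the linked Models_Create reference for the real schema.

```python
# Sketch of the neural Models_Create PUT request. Base path, api-version,
# and the "trainingSetId" field name are assumptions; see the reference.
import json

region = "YourResourceRegion"
model_id = "JessicaModelId"  # case-sensitive; used in the model's URI

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"models/{model_id}?api-version=2024-02-01-preview"  # assumed version
)

body = {
    "projectId": "ProjectId",                 # required
    "consentId": "JessicaConsentId",          # required
    "trainingSetId": "JessicaTrainingSetId",  # hypothetical field name
    "voiceName": "JessicaNeural",             # required; must end with "Neural"
    "description": "Jessica's neural voice",  # optional; can be changed later
}

# requests.put(url, headers={"Ocp-Apim-Subscription-Key": "YourResourceKey",
#                            "Content-Type": "application/json"},
#              data=json.dumps(body))
print(body["voiceName"].endswith("Neural"))  # True
```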
@@ -93,7 +93,7 @@ You should receive a response body in the following format:
 
 # [Neural - cross lingual](#tab/crosslingual)
 
-To create a cross lingual neural voice, use the `Models_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To create a cross-lingual neural voice, use the [Models_Create](/rest/api/speechapi/models/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
 - Set the required `consentId` property. See [add voice talent consent](../../../../professional-voice-create-consent.md).
@@ -103,7 +103,7 @@ To create a cross lingual neural voice, use the `Models_Create` operation of the
 - Set the required `locale` property for the language that your voice speaks. The voice speaks a different language from your training data. You can specify only one target language for a voice model.
 - Optionally, set the `description` property for the voice description. The voice description can be changed later.
 
-Make an HTTP PUT request using the URI as shown in the following `Models_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Models_Create](/rest/api/speechapi/models/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `JessicaModelId` with a model ID of your choice. The case-sensitive ID will be used in the model's URI and can't be changed later.
@@ -146,7 +146,7 @@ You should receive a response body in the following format:
 
 # [Neural - multi style](#tab/multistyle)
 
-To create a multi-style neural voice, use the `Models_Create` operation of the custom voice API. Construct the request body according to the following instructions:
+To create a multi-style neural voice, use the [Models_Create](/rest/api/speechapi/models/create) operation of the custom voice API. Construct the request body according to the following instructions:
 
 - Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
 - Set the required `consentId` property. See [add voice talent consent](../../../../professional-voice-create-consent.md).
@@ -161,7 +161,7 @@ To create a multi-style neural voice, use the `Models_Create` operation of the c
 - For each dictionary value, specify the ID of a training set that you [already created](../../../../professional-voice-create-training-set.md#add-a-professional-voice-training-dataset) for the same voice model. The training set must contain at least 100 utterances for each style.
 - Optionally, set the `description` property for the voice description. The voice description can be changed later.
 
-Make an HTTP PUT request using the URI as shown in the following `Models_Create` example.
+Make an HTTP PUT request using the URI as shown in the following [Models_Create](/rest/api/speechapi/models/create) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `JessicaModelId` with a model ID of your choice. The case-sensitive ID will be used in the model's URI and can't be changed later.
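The multi-style case differs mainly in the style-to-training-set dictionary the bullets describe. A hypothetical sketch of that part of the body: the property name `styles`, the style names, and the training set IDs below are all illustrative stand-ins (the diff only says a dictionary maps style names to training set IDs; the Models_Create reference has the real property name).

```python
# Hypothetical sketch of the multi-style Models_Create body fragment. The
# "styles" property name and the style/ID values are illustrative guesses.
body = {
    "projectId": "ProjectId",
    "consentId": "JessicaConsentId",
    "voiceName": "JessicaNeural",
    # Each value is the ID of an existing training set; each style's
    # training set must contain at least 100 utterances:
    "styles": {
        "cheerful": "JessicaCheerfulTrainingSetId",
        "sad": "JessicaSadTrainingSetId",
    },
}

for style, training_set_id in body["styles"].items():
    print(style, "->", training_set_id)
```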
@@ -237,9 +237,9 @@ The following table summarizes the different preset styles according to differen
 
 ## Get training status
 
-To get the training status of a voice model, use the `Models_Get` operation of the custom voice API. Construct the request URI according to the following instructions:
+To get the training status of a voice model, use the [Models_Get](/rest/api/speechapi/models/get) operation of the custom voice API. Construct the request URI according to the following instructions:
 
-Make an HTTP GET request using the URI as shown in the following `Models_Get` example.
+Make an HTTP GET request using the URI as shown in the following [Models_Get](/rest/api/speechapi/models/get) example.
 - Replace `YourResourceKey` with your Speech resource key.
 - Replace `YourResourceRegion` with your Speech resource region.
 - Replace `JessicaModelId` if you specified a different model ID in the previous step.
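The status-check GET can be sketched as follows. The base path, `api-version`, and the `status` field and its values are assumptions (the linked Models_Get reference documents the real response); the HTTP call is stubbed with a placeholder response body.

```python
# Sketch of the Models_Get status check. Base path, api-version, and the
# "status" field/value names are illustrative assumptions.
region = "YourResourceRegion"
model_id = "JessicaModelId"

url = (
    f"https://{region}.api.cognitive.microsoft.com/customvoice/"
    f"models/{model_id}?api-version=2024-02-01-preview"
)
headers = {"Ocp-Apim-Subscription-Key": "YourResourceKey"}

# response = requests.get(url, headers=headers).json()
# Stubbed response body standing in for the real one:
response = {"id": model_id, "status": "Running"}

if response["status"] == "Succeeded":
    print("training complete")
else:
    print("still training:", response["status"])
```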
articles/ai-services/speech-service/includes/release-notes/release-notes-tts.md (+5 −1)
@@ -2,7 +2,7 @@
 author: eric-urban
 ms.service: azure-ai-speech
 ms.topic: include
-ms.date: 1/18/2024
+ms.date: 2/7/2024
 ms.author: eur
 ms.custom: references_regions
 ---
@@ -16,6 +16,10 @@ The Azure AI Speech service supports OpenAI text to speech voices in the followi
 > [!NOTE]
 > OpenAI text to speech voices are also available in [Azure OpenAI Service](../../../openai/reference.md#text-to-speech).
 
+#### Personal voice
+
+The personal voice feature now supports `DragonLatestNeural` and `PhoenixLatestNeural` models. These new models enhance the naturalness of synthesized voices, better resembling the speech characteristics of the voice in the prompt. For more details, refer to [Integrate personal voice in your application](../../personal-voice-how-to-use.md#integrate-personal-voice-in-your-application).