`articles/ai-services/openai/assistants-reference-messages.md` — 2 additions, 2 deletions
@@ -36,7 +36,7 @@ Create a message.
|Name | Type | Required | Description |
|--- |--- |--- |--- |
-|`role`| string | Required | The role of the entity that is creating the message. Can be `user` or `assistant`. `assistant` indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages. `assistant` indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation. |
+|`role`| string | Required | The role of the entity that is creating the message. Can be `user` or `assistant`. `user` indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages. `assistant` indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation. |
|`content`| string | Required | The content of the message. |
|`file_ids`| array | Optional | A list of File IDs that the message should use. There can be a maximum of 10 files attached to a message. Useful for tools like retrieval and code_interpreter that can access and use files. |
|`metadata`| map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
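The size limits in the table above (at most 10 attached files; up to 16 metadata pairs, with keys up to 64 characters and values up to 512 characters) can be checked client-side before sending a request. The following is an illustrative sketch, not part of any SDK; the helper name and the limits-as-constants are taken directly from the table.

```python
# Illustrative sketch: validate a create-message request body against the
# limits documented in the table above. Not part of any SDK.
MAX_FILE_IDS = 10        # max files attached to a message
MAX_METADATA_PAIRS = 16  # max key-value pairs in metadata
MAX_KEY_LEN = 64         # max metadata key length
MAX_VALUE_LEN = 512      # max metadata value length

def validate_message(body: dict) -> list[str]:
    """Return a list of human-readable problems; empty means valid."""
    problems = []
    if body.get("role") not in ("user", "assistant"):
        problems.append("role must be 'user' or 'assistant'")
    if not isinstance(body.get("content"), str):
        problems.append("content must be a string")
    if len(body.get("file_ids", [])) > MAX_FILE_IDS:
        problems.append(f"at most {MAX_FILE_IDS} file_ids allowed")
    metadata = body.get("metadata", {})
    if len(metadata) > MAX_METADATA_PAIRS:
        problems.append(f"at most {MAX_METADATA_PAIRS} metadata pairs allowed")
    for key, value in metadata.items():
        if len(key) > MAX_KEY_LEN or len(str(value)) > MAX_VALUE_LEN:
            problems.append(f"metadata entry {key!r} exceeds size limits")
    return problems
```

Running such a check locally surfaces limit violations before the API rejects the request.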
@@ -371,7 +371,7 @@ Represents a message within a thread.
|`object`| string |The object type, which is always thread.message.|
|`created_at`| integer |The Unix timestamp (in seconds) for when the message was created.|
|`thread_id`| string |The thread ID that this message belongs to.|
-|`role`| string |The entity that produced the message. One of user or assistant.|
+|`role`| string |The entity that produced the message. One of `user` or `assistant`.|
|`content`| array |The content of the message in array of text and/or images.|
|`assistant_id`| string or null |If applicable, the ID of the assistant that authored this message.|
|`run_id`| string or null |If applicable, the ID of the run associated with the authoring of this message.|
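A message object with the fields from the table above can be consumed as a plain dictionary. The sketch below pulls the text portions out of the `content` array; the IDs and the exact shape of the content items are illustrative assumptions, not values from this reference.

```python
# Illustrative sketch: pick the plain-text parts out of a thread.message
# object shaped like the table above. IDs and the content-item shape
# are hypothetical.
def text_parts(message: dict) -> list[str]:
    """Return the text strings from a message's content array."""
    parts = []
    for item in message.get("content", []):
        if item.get("type") == "text":
            parts.append(item["text"]["value"])
    return parts

sample = {
    "object": "thread.message",
    "created_at": 1700000000,
    "thread_id": "thread_abc123",   # hypothetical ID
    "role": "assistant",
    "content": [{"type": "text", "text": {"value": "Hello!"}}],
    "assistant_id": "asst_abc123",  # hypothetical ID
    "run_id": None,
}
```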
`articles/ai-services/speech-service/how-to-lower-speech-synthesis-latency.md` — 67 additions
@@ -318,6 +318,73 @@ For Linux and Windows, `GStreamer` is required to enable this feature.
Refer to [this instruction](how-to-use-codec-compressed-audio-input-streams.md) to install and configure `GStreamer` for the Speech SDK.

For Android, iOS, and macOS, no extra configuration is needed starting with version 1.20.

+## Text streaming
+
+Text streaming allows real-time text processing for rapid audio generation. It's perfect for dynamic text vocalization, such as reading outputs from AI models like GPT in real time. This feature minimizes latency and improves the fluidity and responsiveness of audio outputs, making it ideal for interactive applications, live events, and responsive AI-driven dialogues.
+
+### How to use text streaming
+
+To use the text streaming feature, connect to the websocket V2 endpoint: `wss://{region}.tts.speech.microsoft.com/cognitiveservices/websocket/v2`
+
+::: zone pivot="programming-language-csharp"
+
+See the sample code for setting the endpoint:
+
+```csharp
+// IMPORTANT: MUST use the websocket v2 endpoint
+var ttsEndpoint = $"wss://{Environment.GetEnvironmentVariable("AZURE_TTS_REGION")}.tts.speech.microsoft.com/cognitiveservices/websocket/v2";
+```
+
+1. **Create a text stream request**: Use `SpeechSynthesisRequestInputType.TextStream` to initiate a text stream.
+1. **Set global properties**: Adjust settings such as output format and voice name directly, as the feature handles partial text inputs and doesn't support SSML. Refer to the following sample code for instructions on how to set them. OpenAI text to speech voices aren't supported by the text streaming feature. See this [language table](language-support.md?tabs=tts#supported-languages) for full language support.
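The endpoint construction shown in the C# snippet above can be sketched in any language; here is a Python equivalent. The `AZURE_TTS_REGION` environment variable and the `eastus` fallback are assumptions for illustration.

```python
import os

# Mirror of the C# snippet above: build the websocket v2 endpoint
# required for text streaming from a region name.
def tts_v2_endpoint(region: str) -> str:
    # IMPORTANT: text streaming requires the websocket v2 endpoint
    return f"wss://{region}.tts.speech.microsoft.com/cognitiveservices/websocket/v2"

# Region taken from the environment, as in the C# sample; "eastus" is
# only an illustrative fallback.
endpoint = tts_v2_endpoint(os.environ.get("AZURE_TTS_REGION", "eastus"))
```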
`articles/ai-services/speech-service/includes/how-to/professional-voice/create-consent/rest.md` — 6 additions, 6 deletions
@@ -11,7 +11,7 @@ ms.custom: include
With the professional voice feature, it's required that every voice be created with explicit consent from the user. A recorded statement from the user is required acknowledging that the customer (Azure AI Speech resource owner) will create and use their voice.

-To add voice talent consent to the professional voice project, you get the prerecorded consent audio file from a publicly accessible URL ([Consents_Create](/rest/api/speechapi/consents/create)) or upload the audio file ([Consents_Post](/rest/api/speechapi/consents/post)). In this article, you add consent from a URL.
+To add voice talent consent to the professional voice project, you get the prerecorded consent audio file from a publicly accessible URL ([Consents_Create](/rest/api/aiservices/speechapi/consents/create)) or upload the audio file ([Consents_Post](/rest/api/aiservices/speechapi/consents/post)). In this article, you add consent from a URL.

## Consent statement

@@ -25,15 +25,15 @@ You can get the consent statement text for each locale from the text to speech G

## Add consent from a URL

-To add consent to a professional voice project from the URL of an audio file, use the [Consents_Create](/rest/api/speechapi/consents/create) operation of the custom voice API. Construct the request body according to the following instructions:
+To add consent to a professional voice project from the URL of an audio file, use the [Consents_Create](/rest/api/aiservices/speechapi/consents/create) operation of the custom voice API. Construct the request body according to the following instructions:

- Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
- Set the required `voiceTalentName` property. The voice talent name can't be changed later.
- Set the required `companyName` property. The company name can't be changed later.
- Set the required `audioUrl` property. The URL of the voice talent consent audio file. Use a URI with the [shared access signatures (SAS)](/azure/storage/common/storage-sas-overview) token.
- Set the required `locale` property. This should be the locale of the consent. The locale can't be changed later. You can find the text to speech locale list [here](/azure/ai-services/speech-service/language-support?tabs=tts).

-Make an HTTP PUT request using the URI as shown in the following [Consents_Create](/rest/api/speechapi/consents/create) example.
+Make an HTTP PUT request using the URI as shown in the following [Consents_Create](/rest/api/aiservices/speechapi/consents/create) example.

- Replace `YourResourceKey` with your Speech resource key.
- Replace `YourResourceRegion` with your Speech resource region.
- Replace `JessicaConsentId` with a consent ID of your choice. The case sensitive ID will be used in the consent's URI and can't be changed later.
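The body properties from the bullet list above can be sketched as follows. This is an illustrative sketch only: the URI path, the `api-version` placeholder, and all sample values (`Jessica Smith`, `Contoso`, the blob URL) are assumptions, not values from the Consents_Create reference; only the property names come from the bullets.

```python
# Illustrative sketch of the Consents_Create PUT request described above.
# URI shape, api-version, and sample values are assumptions; only the
# body property names come from the bullet list.
def build_consent_request(resource_key: str, region: str, consent_id: str):
    url = (f"https://{region}.api.cognitive.microsoft.com/customvoice/"
           f"consents/{consent_id}?api-version=<api-version>")  # placeholder version
    headers = {
        "Ocp-Apim-Subscription-Key": resource_key,  # Speech resource key
        "Content-Type": "application/json",
    }
    body = {
        "projectId": "ProjectId",            # required; see "create a project"
        "voiceTalentName": "Jessica Smith",  # required; can't be changed later
        "companyName": "Contoso",            # required; can't be changed later
        # required; SAS URI of the consent audio file (hypothetical URL)
        "audioUrl": "https://contoso.blob.core.windows.net/public/consent.wav?<SAS>",
        "locale": "en-US",                   # required; locale of the consent
    }
    return url, headers, body
```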
You should receive a response body in the following format:
@@ -65,10 +65,10 @@ You should receive a response body in the following format:
}
```

-The response header contains the `Operation-Location` property. Use this URI to get details about the [Consents_Create](/rest/api/speechapi/consents/create) operation. Here's an example of the response header:
+The response header contains the `Operation-Location` property. Use this URI to get details about the [Consents_Create](/rest/api/aiservices/speechapi/consents/create) operation. Here's an example of the response header:
`articles/ai-services/speech-service/includes/how-to/professional-voice/create-project/rest.md` — 3 additions, 3 deletions
@@ -15,12 +15,12 @@ Each project is specific to a country/region and language, and the gender of the
## Create a project

-To create a professional voice project, use the [Projects_Create](/rest/api/speechapi/projects/create) operation of the custom voice API. Construct the request body according to the following instructions:
+To create a professional voice project, use the [Projects_Create](/rest/api/aiservices/speechapi/projects/create) operation of the custom voice API. Construct the request body according to the following instructions:

- Set the required `kind` property to `ProfessionalVoice`. The kind can't be changed later.
- Optionally, set the `description` property for the project description. The project description can be changed later.

-Make an HTTP PUT request using the URI as shown in the following [Projects_Create](/rest/api/speechapi/projects/create) example.
+Make an HTTP PUT request using the URI as shown in the following [Projects_Create](/rest/api/aiservices/speechapi/projects/create) example.

- Replace `YourResourceKey` with your Speech resource key.
- Replace `YourResourceRegion` with your Speech resource region.
- Replace `ProjectId` with a project ID of your choice. The case sensitive ID must be unique within your Speech resource. The ID will be used in the project's URI and can't be changed later.
@@ -29,7 +29,7 @@ Make an HTTP PUT request using the URI as shown in the following [Projects_Creat
`articles/ai-services/speech-service/includes/how-to/professional-voice/create-training-set/rest.md` — 8 additions, 8 deletions
@@ -15,14 +15,14 @@ In this article, you [create a training set](#create-a-training-set) and get its
## Create a training set

-To create a training set, use the [TrainingSets_Create](/rest/api/speechapi/training-sets/create) operation of the custom voice API. Construct the request body according to the following instructions:
+To create a training set, use the [TrainingSets_Create](/rest/api/aiservices/speechapi/training-sets/create) operation of the custom voice API. Construct the request body according to the following instructions:

- Set the required `projectId` property. See [create a project](../../../../professional-voice-create-project.md).
- Set the required `voiceKind` property to `Male` or `Female`. The kind can't be changed later.
- Set the required `locale` property. This should be the locale of the training set data. The locale of the training set should be the same as the locale of the [consent statement](../../../../professional-voice-create-consent.md). The locale can't be changed later. You can find the text to speech locale list [here](/azure/ai-services/speech-service/language-support?tabs=tts).
- Optionally, set the `description` property for the training set description. The training set description can be changed later.

-Make an HTTP PUT request using the URI as shown in the following [TrainingSets_Create](/rest/api/speechapi/training-sets/create) example.
+Make an HTTP PUT request using the URI as shown in the following [TrainingSets_Create](/rest/api/aiservices/speechapi/training-sets/create) example.

- Replace `YourResourceKey` with your Speech resource key.
- Replace `YourResourceRegion` with your Speech resource region.
- Replace `JessicaTrainingSetId` with a training set ID of your choice. The case sensitive ID will be used in the training set's URI and can't be changed later.
You should receive a response body in the following format:
@@ -53,7 +53,7 @@ You should receive a response body in the following format:
## Upload training set data

-To upload a training set of audio and scripts, use the [TrainingSets_UploadData](/rest/api/speechapi/training-sets/upload-data) operation of the custom voice API.
+To upload a training set of audio and scripts, use the [TrainingSets_UploadData](/rest/api/aiservices/speechapi/training-sets/upload-data) operation of the custom voice API.
Before calling this API, please store recording and script files in Azure Blob. In the example below, recording files are https://contoso.blob.core.windows.net/voicecontainer/jessica300/*.wav, script files are
@@ -70,7 +70,7 @@ Construct the request body according to the following instructions:
- Set the required `extensions` property to the extensions of the script files.
- Optionally, set the `prefix` property to set a prefix for the blob name.

-Make an HTTP POST request using the URI as shown in the following [TrainingSets_UploadData](/rest/api/speechapi/training-sets/upload-data) example.
+Make an HTTP POST request using the URI as shown in the following [TrainingSets_UploadData](/rest/api/aiservices/speechapi/training-sets/upload-data) example.

- Replace `YourResourceKey` with your Speech resource key.
- Replace `YourResourceRegion` with your Speech resource region.
- Replace `JessicaTrainingSetId` if you specified a different training set ID in the previous step.
-The response header contains the `Operation-Location` property. Use this URI to get details about the [TrainingSets_UploadData](/rest/api/speechapi/training-sets/upload-data) operation. Here's an example of the response header:
+The response header contains the `Operation-Location` property. Use this URI to get details about the [TrainingSets_UploadData](/rest/api/aiservices/speechapi/training-sets/upload-data) operation. Here's an example of the response header: