articles/ai-services/speech-service/how-to-custom-speech-transcription-editor.md (6 additions & 3 deletions)
@@ -2,17 +2,20 @@
 title: How to use the online transcription editor for custom speech - Speech service
 titleSuffix: Azure AI services
 description: The online transcription editor allows you to create or edit audio + human-labeled transcriptions for custom speech.
-author: PatrickFarley
+author: goergenj
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 5/19/2025
-ms.author: pafarley
+ms.date: 9/11/2025
+ms.author: jagoerge
 #Customer intent: As a developer, I need to understand how to use the online transcription editor for custom speech so that I can create or edit audio + human-labeled transcriptions for custom speech.
 ---
+> Online transcription editor in Azure AI Speech will be retired on December 15, 2025. You won't be able to use the online transcription editor after this date.
+>
+> This change doesn't affect other Azure AI Speech capabilities such as [speech to text](../speech-to-text.md) (including no change to speaker diarization), [text to speech](../text-to-speech.md), and [speech translation](../speech-translation.md).

 The online transcription editor allows you to create or edit audio + human-labeled transcriptions for custom speech. The main use cases of the editor are as follows:

 * You only have audio data, but want to build accurate audio + human-labeled datasets from scratch to use in model training.

articles/ai-services/speech-service/speech-services-private-link.md

 #Customer intent: As a developer, I want to learn how to use Speech service with private endpoints provided by Azure Private Link.
 ---
@@ -34,7 +34,7 @@ This article describes the usage of the private endpoints with Speech service. U

 ## Create a custom domain name

 > [!CAUTION]
-> an AI Foundry resource for Speech with a custom domain name enabled uses a different way to interact with Speech service. You might have to adjust your application code for both of these scenarios: [with private endpoint](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-with-a-private-endpoint) and [*without* private endpoint](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-without-private-endpoints).
+> An AI Foundry resource for Speech with a custom domain name enabled uses a different way to interact with Speech service. You might have to adjust your application code for both of these scenarios: [with private endpoint](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-with-a-private-endpoint) and [*without* private endpoint](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-without-private-endpoints).
@@ -113,16 +113,16 @@ If you plan to access the resource by using only a private endpoint, you can ski

 ## Adjust an application to use an AI Foundry resource for Speech with a private endpoint

-an AI Foundry resource for Speech with a custom domain interacts with the Speech service in a different way.
+An AI Foundry resource for Speech with a custom domain interacts with the Speech service in a different way.
 This is true for a custom-domain-enabled Speech resource both with and without private endpoints.
 Information in this section applies to both scenarios.

 Follow instructions in this section to adjust existing applications and solutions to use an AI Foundry resource for Speech with a custom domain name and a private endpoint turned on.

-an AI Foundry resource for Speech with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).
+An AI Foundry resource for Speech with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).

 > [!NOTE]
-> an AI Foundry resource for Speech without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
+> An AI Foundry resource for Speech without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
 > This way differs from the scenario of an AI Foundry resource for Speech that uses a private endpoint.
 > This is important to consider because you may decide to remove private endpoints later.
 > See [Adjust an application to use an AI Foundry resource for Speech without private endpoints](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-without-private-endpoints) later in this article.
@@ -242,60 +242,18 @@ Private-endpoint-enabled endpoints communicate with Speech service via a special

 A "standard" endpoint URL looks like: <p/>`{region}.{speech service offering}.speech.microsoft.com/{URL path}`

-A private endpoint URL looks like: <p/>`{your custom name}.cognitiveservices.azure.com/{speech service offering}/{URL path}`
+A private endpoint URL looks like: <p/>`{your custom name}.cognitiveservices.azure.com/{URL path}`

-**Example 1.** An application is communicating by using the following URL (speech recognition using the base model for US English in West Europe):
-
-To use it in the private-endpoint-enabled scenario when the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`, you must modify the URL like this:
-
-- The host name `westeurope.stt.speech.microsoft.com` is replaced by the custom domain host name `my-private-link-speech.cognitiveservices.azure.com`.
-- The second element of the original DNS name (`stt`) becomes the first element of the URL path and precedes the original path. So the original URL `/speech/recognition/conversation/cognitiveservices/v1?language=en-US` becomes `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
-
-**Example 2.** An application uses the following URL to synthesize speech in West Europe:
-
-The following equivalent URL uses a private endpoint, where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:
-
-The same principle in Example 1 is applied, but the key element this time is `tts`.
+The Speech SDK automatically configures the `/{URL path}`, depending on the service used.
+Therefore, only the base URL (`{your custom name}.cognitiveservices.azure.com`) must be configured, as described in the next section.
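
For illustration, here's a minimal sketch of the URL mapping described above. It assumes the public speech-to-text REST path; the region, custom domain, and query values are placeholders rather than values taken from this diff:

```python
# Sketch: map a "standard" endpoint URL to its private-endpoint equivalent.
# Assumption: the private endpoint accepts the same URL path as the public
# endpoint, with only the host name replaced by the custom domain.
region = "westeurope"
offering = "stt"  # the {speech service offering} element of the standard host
custom_domain = "my-private-link-speech.cognitiveservices.azure.com"
url_path = "/speech/recognition/conversation/cognitiveservices/v1?language=en-US"

standard_url = f"https://{region}.{offering}.speech.microsoft.com{url_path}"
private_url = f"https://{custom_domain}{url_path}"

print(standard_url)
print(private_url)
```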

 #### Modifying applications

 Follow these steps to modify your code:

-1. Determine the application endpoint URL:
-
-   [Turn on logging for your application](how-to-use-logging.md) and run it to log activity.
-   In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL that your application used to reach the Speech service.
+1. Determine the application endpoint URL from the **Keys and Endpoint** menu of your resource in the Azure portal. In this example, it's `my-private-link-speech.cognitiveservices.azure.com`.

-2. Create a `SpeechConfig` instance by using a full endpoint URL:
+2. Create a `SpeechConfig` instance by using an endpoint URL:

    1. Modify the endpoint that you determined, as described in the earlier [Construct endpoint URL](#construct-endpoint-url) section.
@@ -307,9 +265,9 @@ Follow these steps to modify your code:

 To make it work, modify how you instantiate the `SpeechConfig` class and use "from endpoint"/"with endpoint" initialization. Suppose we have the following two variables defined: the resource key and the custom-domain endpoint (see the sketch after this section).

 > The query parameters specified in the endpoint URI are not changed, even if they're set by other APIs. For example, if the recognition language is defined in the URI as query parameter `language=en-US`, and is also set to `ru-RU` via the corresponding property, the language setting in the URI is used. The effective language is then `en-US`.

 After this modification, your application should work with private-endpoint-enabled Speech resources. We're working on more seamless support of private endpoint scenarios.
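
Here's the sketch referenced above: a minimal example of the two steps, assuming the Python Speech SDK (`azure-cognitiveservices-speech`). The key and custom domain are placeholders for your own resource's values:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder values: use your own resource key and custom domain name.
subscription_key = "YourResourceKey"
endpoint = "https://my-private-link-speech.cognitiveservices.azure.com"

# "From endpoint" initialization: pass the endpoint instead of a region.
speech_config = speechsdk.SpeechConfig(endpoint=endpoint, subscription=subscription_key)

# The SDK appends the service-specific URL path for you, so only the base
# URL is configured here. Recognition uses the default microphone input.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print(result.text)
```

Passing the endpoint rather than a region keeps the rest of the application unchanged; any query parameters set in the endpoint URI take precedence over values set through properties, per the note above.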
### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK