
Commit 9e8a788

Merge pull request #7045 from goergenj/jagoerge-privendpointfix+deprecationnote
Update SDK config for private endpoints and move Online Transcription…
2 parents 3618d15 + ffbb8bd commit 9e8a788

4 files changed (+41 -83 lines changed)

articles/ai-services/speech-service/how-to-custom-speech-transcription-editor.md

Lines changed: 6 additions & 3 deletions
@@ -2,17 +2,20 @@
 title: How to use the online transcription editor for custom speech - Speech service
 titleSuffix: Azure AI services
 description: The online transcription editor allows you to create or edit audio + human-labeled transcriptions for custom speech.
-author: PatrickFarley
+author: goergenj
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 5/19/2025
-ms.author: pafarley
+ms.date: 9/11/2025
+ms.author: jagoerge
 #Customer intent: As a developer, I need to understand how to use the online transcription editor for custom speech so that I can create or edit audio + human-labeled transcriptions for custom speech.
 ---
 
 # How to use the online transcription editor
 
+[!INCLUDE [deprecation notice](./includes/retire-online-transcription-editor.md)]
+
+
 The online transcription editor allows you to create or edit audio + human-labeled transcriptions for custom speech. The main use cases of the editor are as follows:
 
 * You only have audio data, but want to build accurate audio + human-labeled datasets from scratch to use in model training.
articles/ai-services/speech-service/includes/retire-online-transcription-editor.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+---
+author: goergenj
+manager: nitinme
+ms.service: azure-ai-speech
+ms.topic: include
+ms.date: 9/11/2025
+ms.author: jagoerge
+---
+
+> [!IMPORTANT]
+> The online transcription editor in Azure AI Speech will be retired on December 15, 2025. You won't be able to use the online transcription editor after that date.
+>
+> This change doesn't affect other Azure AI Speech capabilities such as [speech to text](../speech-to-text.md) (including speaker diarization), [text to speech](../text-to-speech.md), and [speech translation](../speech-translation.md).

articles/ai-services/speech-service/speech-services-private-link.md

Lines changed: 20 additions & 78 deletions
@@ -2,13 +2,13 @@
 title: How to use private endpoints with Speech service
 titleSuffix: Azure AI services
 description: Learn how to use Speech service with private endpoints provided by Azure Private Link.
-author: PatrickFarley
-ms.author: pafarley
+author: goergenj
+ms.author: jagoerge
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 08/07/2025
-ms.reviewer: jagoerge
+ms.date: 09/11/2025
+ms.reviewer: pafarley
 ms.custom: devx-track-azurepowershell, devx-track-azurecli
 #Customer intent: As a developer, I want to learn how to use Speech service with private endpoints provided by Azure Private Link.
 ---
@@ -34,7 +34,7 @@ This article describes the usage of the private endpoints with Speech service. U
 
 ## Create a custom domain name
 > [!CAUTION]
-> an AI Foundry resource for Speech with a custom domain name enabled uses a different way to interact with Speech service. You might have to adjust your application code for both of these scenarios: [with private endpoint](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-with-a-private-endpoint) and [*without* private endpoint](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-without-private-endpoints).
+> An AI Foundry resource for Speech with a custom domain name enabled uses a different way to interact with Speech service. You might have to adjust your application code for both of these scenarios: [with private endpoint](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-with-a-private-endpoint) and [*without* private endpoint](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-without-private-endpoints).
 >
 
 [!INCLUDE [Custom Domain include](includes/how-to/custom-domain.md)]
@@ -113,16 +113,16 @@ If you plan to access the resource by using only a private endpoint, you can ski
 
 ## Adjust an application to use an AI Foundry resource for Speech with a private endpoint
 
-an AI Foundry resource for Speech with a custom domain interacts with the Speech service in a different way.
+An AI Foundry resource for Speech with a custom domain interacts with the Speech service in a different way.
 This is true for a custom-domain-enabled Speech resource both with and without private endpoints.
 Information in this section applies to both scenarios.
 
 Follow instructions in this section to adjust existing applications and solutions to use an AI Foundry resource for Speech with a custom domain name and a private endpoint turned on.
 
-an AI Foundry resource for Speech with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).
+An AI Foundry resource for Speech with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).
 
 > [!NOTE]
-> an AI Foundry resource for Speech without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
+> An AI Foundry resource for Speech without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
 > This way differs from the scenario of an AI Foundry resource for Speech that uses a private endpoint.
 > This is important to consider because you may decide to remove private endpoints later.
 > See [Adjust an application to use an AI Foundry resource for Speech without private endpoints](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-without-private-endpoints) later in this article.
@@ -242,60 +242,18 @@ Private-endpoint-enabled endpoints communicate with Speech service via a special
 
 A "standard" endpoint URL looks like: <p/>`{region}.{speech service offering}.speech.microsoft.com/{URL path}`
 
-A private endpoint URL looks like: <p/>`{your custom name}.cognitiveservices.azure.com/{speech service offering}/{URL path}`
+A private endpoint URL looks like: <p/>`{your custom name}.cognitiveservices.azure.com/{URL path}`
 
-**Example 1.** An application is communicating by using the following URL (speech recognition using the base model for US English in West Europe):
-
-```
-wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
-```
-
-To use it in the private-endpoint-enabled scenario when the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`, you must modify the URL like this:
-
-```
-wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
-```
-
-Notice the details:
-
-- The host name `westeurope.stt.speech.microsoft.com` is replaced by the custom domain host name `my-private-link-speech.cognitiveservices.azure.com`.
-- The second element of the original DNS name (`stt`) becomes the first element of the URL path and precedes the original path. So the original URL `/speech/recognition/conversation/cognitiveservices/v1?language=en-US` becomes `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
-
-**Example 2.** An application uses the following URL to synthesize speech in West Europe:
-```
-wss://westeurope.tts.speech.microsoft.com/cognitiveservices/websocket/v1
-```
-
-The following equivalent URL uses a private endpoint, where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:
-
-```
-wss://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/websocket/v1
-```
-
-The same principle in Example 1 is applied, but the key element this time is `tts`.
+The Speech SDK automatically configures the `/{URL path}` part, depending on the service that's used.
+Therefore, only the base URL must be configured as described.
 
 #### Modifying applications
 
 Follow these steps to modify your code:
 
-1. Determine the application endpoint URL:
-
-   - [Turn on logging for your application](how-to-use-logging.md) and run it to log activity.
-   - In the log file, search for `SPEECH-ConnectionUrl`. In matching lines, the `value` parameter contains the full URL that your application used to reach the Speech service.
-
-   Example:
-
-   ```
-   (114917): 41ms SPX_DBG_TRACE_VERBOSE: property_bag_impl.cpp:138 ISpxPropertyBagImpl::LogPropertyAndValue: this=0x0000028FE4809D78; name='SPEECH-ConnectionUrl'; value='wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?traffictype=spx&language=en-US'
-   ```
-
-   So the URL that the application used in this example is:
-
-   ```
-   wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
-   ```
+1. Determine the application endpoint URL from the **Keys and Endpoint** page of your resource in the Azure portal. In this example, it's `my-private-link-speech.cognitiveservices.azure.com`.
 
-2. Create a `SpeechConfig` instance by using a full endpoint URL:
+2. Create a `SpeechConfig` instance by using an endpoint URL:
 
    1. Modify the endpoint that you determined, as described in the earlier [Construct endpoint URL](#construct-endpoint-url) section.
 
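To make the adjusted flow concrete, here's a minimal C# sketch of these two steps. It's an illustration only: the resource name `my-private-link-speech` is taken from the article's example, the key value is a placeholder, and `SpeechConfig.FromEndpoint` is the "from endpoint" initialization referenced below. Only the base custom-domain URL is passed; per the change above, the SDK configures the service-specific `/{URL path}` itself.

```csharp
using System;
using Microsoft.CognitiveServices.Speech;

// Base custom-domain URL only (from the resource's Keys and Endpoint page);
// the SDK appends the service-specific /{URL path} on its own.
var endPoint = new Uri("wss://my-private-link-speech.cognitiveservices.azure.com");
var speechKey = "<your-speech-key>"; // placeholder: key of the private-endpoint-enabled resource

// "From endpoint" initialization instead of FromSubscription(key, region).
var config = SpeechConfig.FromEndpoint(endPoint, speechKey);
```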
@@ -307,9 +265,9 @@ Follow these steps to modify your code:
 
 To make it work, modify how you instantiate the `SpeechConfig` class and use "from endpoint"/"with endpoint" initialization. Suppose we have the following two variables defined:
 - `speechKey` contains the key of the private-endpoint-enabled Speech resource.
-- `endPoint` contains the full *modified* endpoint URL (using the type required by the corresponding programming language). In our example, this variable should contain:
+- `endPoint` contains the *modified* endpoint URL (using the type required by the corresponding programming language). In our example, this variable should contain:
 ```
-wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+wss://my-private-link-speech.cognitiveservices.azure.com
 ```
 
 Create a `SpeechConfig` instance:
@@ -334,13 +292,13 @@ Follow these steps to modify your code:
 config: sdk.SpeechConfig = sdk.SpeechConfig.fromEndpoint(new URL(endPoint), speechKey);
 ```
 
-> [!TIP]
-> The query parameters specified in the endpoint URI are not changed, even if they're set by other APIs. For example, if the recognition language is defined in the URI as query parameter `language=en-US`, and is also set to `ru-RU` via the corresponding property, the language setting in the URI is used. The effective language is then `en-US`.
->
-> Parameters set in the endpoint URI always take precedence. Other APIs can override only parameters that are not specified in the endpoint URI.
-
 After this modification, your application should work with the private-endpoint-enabled Speech resources. We're working on more seamless support of private endpoint scenarios.
 
+### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK
+
+Using the Speech SDK with custom-domain-enabled Speech resources *without* private endpoints is equivalent to the configuration described for resources *with* private endpoints in this article.
+
+
 [!INCLUDE [](includes/speech-studio-vnet.md)]
 
 
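For comparison with the JavaScript snippet above, a C# counterpart might look like the following sketch. It's illustrative only: the microphone input and the `en-US` recognition language are assumptions, not part of the original example.

```csharp
using System;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

var speechKey = "<your-speech-key>"; // placeholder key
var endPoint = new Uri("wss://my-private-link-speech.cognitiveservices.azure.com");

// Endpoint-based initialization, as described in the steps above.
var config = SpeechConfig.FromEndpoint(endPoint, speechKey);
config.SpeechRecognitionLanguage = "en-US"; // assumed language for this sketch

// Recognition code itself needs no further changes.
using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
using var recognizer = new SpeechRecognizer(config, audioConfig);
var result = await recognizer.RecognizeOnceAsync();
Console.WriteLine($"Recognized: {result.Text}");
```

Per the new section above, the same `FromEndpoint` configuration also applies to a custom-domain resource *without* private endpoints.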
@@ -389,22 +347,6 @@ In this case, usage of the Speech to text REST API for short audio and usage of
 >
 > Using an authorization token and passing it to the special endpoint via the `Authorization` header will work *only* if you've turned on the **All networks** access option in the **Networking** section of your Speech resource. In other cases you will get either `Forbidden` or `BadRequest` error when trying to obtain an authorization token.
 
-### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK
-
-Using the Speech SDK with custom-domain-enabled Speech resources *without* private endpoints is equivalent to the general case as described in the [Speech SDK documentation](speech-sdk.md).
-
-In case you have modified your code for using with a [private-endpoint-enabled Speech resource](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), consider the following.
-
-In the section on [private-endpoint-enabled Speech resources](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), we explained how to determine the endpoint URL, modify it, and make it work through "from endpoint"/"with endpoint" initialization of the `SpeechConfig` class instance.
-
-However, if you try to run the same application after having all private endpoints removed (allowing some time for the corresponding DNS record reprovisioning), you'll get an internal service error (404). The reason is that the [DNS record](#dns-configuration) now points to the regional Azure AI services endpoint instead of the virtual network proxy, and the URL paths like `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US` isn't found there.
-
-You need to roll back your application to the standard instantiation of `SpeechConfig` in the style of the following code:
-
-```csharp
-var config = SpeechConfig.FromSubscription(speechKey, azureRegion);
-```
-
 [!INCLUDE [](includes/speech-vnet-service-enpoints-private-endpoints-simultaneously.md)]
 
 ## Pricing

articles/ai-services/speech-service/toc.yml

Lines changed: 2 additions & 2 deletions
@@ -80,8 +80,6 @@ items:
     href: how-to-custom-speech-human-labeled-transcriptions.md
   - name: Structured text phonetic pronunciation
     href: customize-pronunciation.md
-  - name: Online transcription editor
-    href: how-to-custom-speech-transcription-editor.md
   - name: Display text format training data
     href: how-to-custom-speech-display-text-format.md
   - name: Custom speech model lifecycle
@@ -504,6 +502,8 @@ items:
     href: get-started-intent-recognition-clu.md
   - name: Recognize speech intents with LUIS (deprecated)
     href: get-started-intent-recognition.md
+  - name: Online transcription editor
+    href: how-to-custom-speech-transcription-editor.md
 - name: Speech to text REST API migration
   items:
   - name: From Speech to text v3.2 to 2024-11-15
