Commit 73241db

Update SDK config for private endpoints and move Online Transcription Editor to deprecating section
1 parent 3618d15 commit 73241db

4 files changed: +36 -41 lines changed

articles/ai-services/speech-service/how-to-custom-speech-transcription-editor.md

Lines changed: 6 additions & 3 deletions
@@ -2,17 +2,20 @@
 title: How to use the online transcription editor for custom speech - Speech service
 titleSuffix: Azure AI services
 description: The online transcription editor allows you to create or edit audio + human-labeled transcriptions for custom speech.
-author: PatrickFarley
+author: goergenj
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 5/19/2025
-ms.author: pafarley
+ms.date: 9/11/2025
+ms.author: jagoerge
 #Customer intent: As a developer, I need to understand how to use the online transcription editor for custom speech so that I can create or edit audio + human-labeled transcriptions for custom speech.
 ---

 # How to use the online transcription editor

+[!INCLUDE [deprecation notice](./includes/retire-online-transcription-editor.md)]
+
+
 The online transcription editor allows you to create or edit audio + human-labeled transcriptions for custom speech. The main use cases of the editor are as follows:

 * You only have audio data, but want to build accurate audio + human-labeled datasets from scratch to use in model training.
articles/ai-services/speech-service/includes/retire-online-transcription-editor.md

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+---
+author: goergenj
+manager: nitinme
+ms.service: azure-ai-speech
+ms.topic: include
+ms.date: 9/11/2025
+ms.author: jagoerge
+---
+
+> [!IMPORTANT]
+> Online transcription editor in Azure AI Speech will be retired on December 15th 2025. You won't be able to use online transcription editor after this date.
+>
+> This change doesn't affect other Azure AI Speech capabilities such as [speech to text](../speech-to-text.md) (including no change to speaker diarization), [text to speech](../text-to-speech.md), and [speech translation](../speech-translation.md).

articles/ai-services/speech-service/speech-services-private-link.md

Lines changed: 15 additions & 36 deletions
@@ -2,13 +2,13 @@
 title: How to use private endpoints with Speech service
 titleSuffix: Azure AI services
 description: Learn how to use Speech service with private endpoints provided by Azure Private Link.
-author: PatrickFarley
-ms.author: pafarley
+author: goergenj
+ms.author: jagoerge
 manager: nitinme
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 08/07/2025
-ms.reviewer: jagoerge
+ms.date: 09/11/2025
+ms.reviewer: pafarley
 ms.custom: devx-track-azurepowershell, devx-track-azurecli
 #Customer intent: As a developer, I want to learn how to use Speech service with private endpoints provided by Azure Private Link.
 ---
@@ -113,16 +113,16 @@ If you plan to access the resource by using only a private endpoint, you can ski

 ## Adjust an application to use an AI Foundry resource for Speech with a private endpoint

-an AI Foundry resource for Speech with a custom domain interacts with the Speech service in a different way.
+An AI Foundry resource for Speech with a custom domain interacts with the Speech service in a different way.
 This is true for a custom-domain-enabled Speech resource both with and without private endpoints.
 Information in this section applies to both scenarios.

 Follow instructions in this section to adjust existing applications and solutions to use an AI Foundry resource for Speech with a custom domain name and a private endpoint turned on.

-an AI Foundry resource for Speech with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).
+An AI Foundry resource for Speech with a custom domain name and a private endpoint turned on uses a different way to interact with the Speech service. This section explains how to use such a resource with the Speech service REST APIs and the [Speech SDK](speech-sdk.md).

 > [!NOTE]
-> an AI Foundry resource for Speech without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
+> An AI Foundry resource for Speech without private endpoints that uses a custom domain name also has a special way of interacting with the Speech service.
 > This way differs from the scenario of an AI Foundry resource for Speech that uses a private endpoint.
 > This is important to consider because you may decide to remove private endpoints later.
 > See [Adjust an application to use an AI Foundry resource for Speech without private endpoints](#adjust-an-application-to-use-an-ai-foundry-resource-for-speech-without-private-endpoints) later in this article.
@@ -244,36 +244,31 @@ A "standard" endpoint URL looks like: <p/>`{region}.{speech service offering}.sp

 A private endpoint URL looks like: <p/>`{your custom name}.cognitiveservices.azure.com/{speech service offering}/{URL path}`

+The Speech SDK will automatically configure the <p/>`/{URL path}` depending on the service used. Therefore, only the <p/>`/{baseURL}` must be configured as described.
+
 **Example 1.** An application is communicating by using the following URL (speech recognition using the base model for US English in West Europe):

 ```
-wss://westeurope.stt.speech.microsoft.com/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+wss://westeurope.stt.speech.microsoft.com
 ```

 To use it in the private-endpoint-enabled scenario when the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`, you must modify the URL like this:

 ```
-wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+wss://my-private-link-speech.cognitiveservices.azure.com
 ```

-Notice the details:
-
-- The host name `westeurope.stt.speech.microsoft.com` is replaced by the custom domain host name `my-private-link-speech.cognitiveservices.azure.com`.
-- The second element of the original DNS name (`stt`) becomes the first element of the URL path and precedes the original path. So the original URL `/speech/recognition/conversation/cognitiveservices/v1?language=en-US` becomes `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US`.
-
 **Example 2.** An application uses the following URL to synthesize speech in West Europe:
 ```
-wss://westeurope.tts.speech.microsoft.com/cognitiveservices/websocket/v1
+wss://westeurope.tts.speech.microsoft.com
 ```

 The following equivalent URL uses a private endpoint, where the custom domain name of the Speech resource is `my-private-link-speech.cognitiveservices.azure.com`:

 ```
-wss://my-private-link-speech.cognitiveservices.azure.com/tts/cognitiveservices/websocket/v1
+wss://my-private-link-speech.cognitiveservices.azure.com
 ```

-The same principle in Example 1 is applied, but the key element this time is `tts`.
-
 #### Modifying applications

 Follow these steps to modify your code:
@@ -309,7 +304,7 @@ Follow these steps to modify your code:
 - `speechKey` contains the key of the private-endpoint-enabled Speech resource.
 - `endPoint` contains the full *modified* endpoint URL (using the type required by the corresponding programming language). In our example, this variable should contain:
 ```
-wss://my-private-link-speech.cognitiveservices.azure.com/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US
+wss://my-private-link-speech.cognitiveservices.azure.com
 ```

 Create a `SpeechConfig` instance:
@@ -334,11 +329,6 @@ Follow these steps to modify your code:
 config: sdk.SpeechConfig = sdk.SpeechConfig.fromEndpoint(new URL(endPoint), speechKey);
 ```

-> [!TIP]
-> The query parameters specified in the endpoint URI are not changed, even if they're set by other APIs. For example, if the recognition language is defined in the URI as query parameter `language=en-US`, and is also set to `ru-RU` via the corresponding property, the language setting in the URI is used. The effective language is then `en-US`.
->
-> Parameters set in the endpoint URI always take precedence. Other APIs can override only parameters that are not specified in the endpoint URI.
-
 After this modification, your application should work with the private-endpoint-enabled Speech resources. We're working on more seamless support of private endpoint scenarios.

 [!INCLUDE [](includes/speech-studio-vnet.md)]
@@ -391,19 +381,8 @@ In this case, usage of the Speech to text REST API for short audio and usage of

 ### Speech resource with a custom domain name and without private endpoints: Usage with the Speech SDK

-Using the Speech SDK with custom-domain-enabled Speech resources *without* private endpoints is equivalent to the general case as described in the [Speech SDK documentation](speech-sdk.md).
-
-In case you have modified your code for using with a [private-endpoint-enabled Speech resource](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), consider the following.
-
-In the section on [private-endpoint-enabled Speech resources](#speech-resource-with-a-custom-domain-name-and-a-private-endpoint-usage-with-the-speech-sdk), we explained how to determine the endpoint URL, modify it, and make it work through "from endpoint"/"with endpoint" initialization of the `SpeechConfig` class instance.
+Using the Speech SDK with custom-domain-enabled Speech resources *without* private endpoints is equivalent to the configuration described *with* private endpoints in this document.

-However, if you try to run the same application after having all private endpoints removed (allowing some time for the corresponding DNS record reprovisioning), you'll get an internal service error (404). The reason is that the [DNS record](#dns-configuration) now points to the regional Azure AI services endpoint instead of the virtual network proxy, and the URL paths like `/stt/speech/recognition/conversation/cognitiveservices/v1?language=en-US` isn't found there.
-
-You need to roll back your application to the standard instantiation of `SpeechConfig` in the style of the following code:
-
-```csharp
-var config = SpeechConfig.FromSubscription(speechKey, azureRegion);
-```

 [!INCLUDE [](includes/speech-vnet-service-enpoints-private-endpoints-simultaneously.md)]
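The endpoint change in this diff (passing only the custom-domain base URL and letting the Speech SDK append the service-specific `/{URL path}` itself) can be sketched with a small illustrative helper. This function is not part of the Speech SDK; its name and structure are assumptions for illustration only:

```python
from urllib.parse import urlparse


def private_link_base_url(standard_url: str, custom_domain: str) -> str:
    """Map a standard regional Speech endpoint to the base URL used with a
    private-endpoint-enabled resource.

    Per the updated guidance in this commit, only the base URL is supplied;
    the Speech SDK appends the service-specific /{URL path} on its own, so
    the host, path, and query of the standard URL are all dropped.
    """
    parsed = urlparse(standard_url)
    # Keep only the scheme (wss/https) and swap in the custom domain host.
    return f"{parsed.scheme}://{custom_domain}"


# Example 1 from the diff: speech recognition in West Europe.
endpoint = private_link_base_url(
    "wss://westeurope.stt.speech.microsoft.com",
    "my-private-link-speech.cognitiveservices.azure.com",
)
print(endpoint)  # wss://my-private-link-speech.cognitiveservices.azure.com
```

With the Speech SDK itself, this base URL would then be passed to the "from endpoint" initializer shown in the diff, for example `sdk.SpeechConfig.fromEndpoint(new URL(endPoint), speechKey)` in JavaScript.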

articles/ai-services/speech-service/toc.yml

Lines changed: 2 additions & 2 deletions
@@ -80,8 +80,6 @@ items:
   href: how-to-custom-speech-human-labeled-transcriptions.md
 - name: Structured text phonetic pronunciation
   href: customize-pronunciation.md
-- name: Online transcription editor
-  href: how-to-custom-speech-transcription-editor.md
 - name: Display text format training data
   href: how-to-custom-speech-display-text-format.md
 - name: Custom speech model lifecycle
@@ -504,6 +502,8 @@ items:
   href: get-started-intent-recognition-clu.md
 - name: Recognize speech intents with LUIS (deprecated)
   href: get-started-intent-recognition.md
+- name: Online transcription editor
+  href: how-to-custom-speech-transcription-editor.md
 - name: Speech to text REST API migration
   items:
   - name: From Speech to text v3.2 to 2024-11-15
