articles/ai-services/speech-service/regions.md
41 additions & 73 deletions
@@ -6,15 +6,15 @@ author: eric-urban
manager: nitinme
ms.service: azure-ai-speech
ms.topic: conceptual
-ms.date: 9/23/2024
+ms.date: 1/6/2025
ms.author: eur
ms.custom: references_regions
#Customer intent: As a developer, I want to learn about the available regions and endpoints for the Speech service.
---
# Speech service supported regions
-The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs. You can perform custom configurations to your speech experience, for all regions, at the [Speech Studio](https://aka.ms/speechstudio/).
+The Speech service allows your application to convert audio to text, perform speech translation, and convert text to speech. The service is available in multiple regions with unique endpoints for the Speech SDK and REST APIs.
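As an editorial illustration of the region-specific endpoints mentioned in the paragraph above (not part of the original article), here is a short Python sketch of how regional Speech hostnames are typically formed. The host and path patterns are assumptions based on the public Speech REST APIs and should be verified against the REST API reference for your scenario.

```python
# A minimal sketch of region-specific Speech endpoints, assuming the common host
# patterns of the Speech REST APIs; confirm exact paths in the REST API reference.
REGION = "westeurope"  # placeholder region identifier

# Speech to text (short audio) endpoint for the chosen region.
stt_endpoint = (
    f"https://{REGION}.stt.speech.microsoft.com/"
    "speech/recognition/conversation/cognitiveservices/v1"
)
# Text to speech endpoint for the chosen region.
tts_endpoint = f"https://{REGION}.tts.speech.microsoft.com/cognitiveservices/v1"
# Management host used by custom speech and batch APIs.
management_host = f"https://{REGION}.api.cognitive.microsoft.com"

print(stt_endpoint)
print(tts_endpoint)
print(management_host)
```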
Keep in mind the following points:
@@ -23,86 +23,54 @@ Keep in mind the following points:
- Keys created for a region are valid only in that region. If you attempt to use them with other regions, you get authentication errors.
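The snippet below is a minimal editorial illustration of this point, using the Python Speech SDK (`azure-cognitiveservices-speech` package) with placeholder values: the key and region passed to `SpeechConfig` must belong to the same Speech resource, otherwise requests fail with an authentication error.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- substitute the key and region of your own Speech resource.
# A key issued for a resource in `westus` only authenticates against region="westus";
# pairing it with a different region (for example "eastus") produces an authentication error.
speech_config = speechsdk.SpeechConfig(
    subscription="<your-speech-resource-key>",
    region="westus",
)

# Recognize a single utterance from the default microphone using that regional resource.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()
print(result.text)
```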
> [!NOTE]
-> Speech service doesn't store or process customer data outside the region the customer deploys the service instance in.
+> Speech service doesn't store or process your data outside the region of your Speech resource. The data is only stored or processed in the region where the resource is created. For example, if you create a Speech resource in the `westus` region, the data is only in the `westus` region.
-## Speech service
+## Regions
-The following regions are supported for Speech service features such as speech to text, text to speech, pronunciation assessment, and translation. The geographies are listed in alphabetical order.
+The regions in this table support most of the core features of the Speech service, such as speech to text, text to speech, pronunciation assessment, and translation. Some features, such as fast transcription and batch synthesis API, require specific regions. For the features that require specific regions, the table indicates the regions that support them.
-|**Region**|**Fast transcription**|**Video translation**|**Batch synthesis API**|**Custom speech**|**Custom speech training with audio**<sup>1</sup> |**Custom neural voice**|**Custom neural voice training**<sup>2</sup> |**Custom neural voice high performance endpoint**|**Personal voice**|**Text to speech avatar**|**Custom keyword advanced models**|**Keyword verification**|**Speaker recognition**|**Intent recognition**<sup>3</sup> |**Voice assistants**|
+|**Region**|**Fast transcription**|**Batch synthesis API**|**Custom speech**|**Custom speech training with audio**<sup>1</sup> |**Custom neural voice**|**Custom neural voice training**<sup>2</sup> |**Custom neural voice high performance endpoint**|**Personal voice**|**Text to speech avatar**|**Video translation**|**Custom keyword advanced models**|**Keyword verification**|**Speaker recognition**|**Intent recognition**<sup>3</sup> |**Voice assistants**<sup>4</sup>|
<sup>1</sup> The region has dedicated hardware for custom speech training. If you plan to train a custom model with audio data, you must use one of the regions with dedicated hardware. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
<sup>2</sup> The region is available for custom neural voice training. You can copy a trained neural voice model to other regions for deployment.
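The two footnotes above both involve copying a trained model from a training region to another region. As an editorial aside, the sketch below illustrates the custom speech case in Python. The `copyto` route, API version, and payload are assumptions based on the v3.x Speech to Text REST API; the linked how-to article is the authoritative reference, and all keys and identifiers are placeholders.

```python
import requests

# Placeholder values -- substitute your own resource and model details.
SOURCE_REGION = "eastus"                      # training region with dedicated hardware
SOURCE_KEY = "<source-speech-resource-key>"
TARGET_KEY = "<target-speech-resource-key>"   # key of the Speech resource in the destination region
MODEL_ID = "<custom-speech-model-id>"

# Assumed route and payload for the model copy operation; verify against the how-to article.
url = (
    f"https://{SOURCE_REGION}.api.cognitive.microsoft.com/"
    f"speechtotext/v3.1/models/{MODEL_ID}/copyto"
)
response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": SOURCE_KEY},
    json={"targetSubscriptionKey": TARGET_KEY},
)
response.raise_for_status()
print(response.json())  # metadata of the copied model, now owned by the target resource
```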
-<sup>3</sup> For intent recognitiion, the [Speech SDK](speech-sdk.md) supports voice assistant capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.
+<sup>3</sup> For intent recognition, the [Speech SDK](speech-sdk.md) supports voice assistant capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.
-## Intent recognition
+<sup>4</sup> The [Speech SDK](speech-sdk.md) supports voice assistant capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.
-Available regions for intent recognition via the Speech SDK are in the following table.
-| North America | South Central US |`southcentralus`|
-| North America | West Central US |`westcentralus`|
-| North America | West US |`westus`|
-| North America | West US 2 |`westus2`|
-| South America | Brazil South |`brazilsouth`|
-
-This is a subset of the publishing regions supported by the [Language Understanding service (LUIS)](../luis/luis-reference-regions.md).
-
-## Voice assistants
-
-The [Speech SDK](speech-sdk.md) supports voice assistant capabilities through [Direct Line Speech](./direct-line-speech.md) for regions in the following table.