articles/ai-services/speech-service/how-to-deploy-and-use-endpoint.md (4 additions, 0 deletions)
@@ -33,6 +33,10 @@ To create a custom neural voice endpoint:
 1. Select **Custom Voice** > Your project name > **Deploy model** > **Deploy model**.
 1. Select a voice model that you want to associate with this endpoint.
 1. Enter a **Name** and **Description** for your custom endpoint.
+1. Select the **Endpoint type** according to your scenario. If your resource is in a supported region, the default endpoint type is *High performance*. If the resource is in an unsupported region, the only available option is *Fast resume*.
+   - *High performance*: Optimized for scenarios with real-time, high-volume synthesis requests, such as conversational AI and call-center bots. It takes around 5 minutes to deploy or resume an endpoint. For information about regions where the *High performance* endpoint type is supported, see the footnotes in the [regions](regions.md#speech-service) table.
+   - *Fast resume*: Optimized for audio content creation scenarios with less frequent synthesis requests. An endpoint can be deployed or resumed quickly, in under a minute. The *Fast resume* endpoint type is supported in all [regions](regions.md#speech-service) where text to speech is available.
 1. Select **Deploy** to create your endpoint.

After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
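Once the endpoint key and URL are available, synthesis requests can be sent directly over REST. The sketch below builds such a request for a deployed custom voice endpoint. This is a minimal illustration, not from the article itself: the URL shape and headers are assumptions based on the Speech service text-to-speech REST API, and the region, deployment ID, voice name, and key are placeholders.

```python
# Sketch: building a text-to-speech REST request for a deployed custom
# voice endpoint. The URL shape (voice.speech.microsoft.com with a
# deploymentId query parameter) and header names are assumptions based on
# the Speech service REST API; all concrete values below are placeholders.

def build_tts_request(region, deployment_id, voice_name, text):
    """Return the endpoint URL, headers, and SSML body for a synthesis call."""
    url = (
        f"https://{region}.voice.speech.microsoft.com/"
        f"cognitiveservices/v1?deploymentId={deployment_id}"
    )
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-endpoint-key>",  # placeholder
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
    }
    # SSML selects the custom voice by the name it was deployed under.
    ssml = (
        "<speak version='1.0' xml:lang='en-US'>"
        f"<voice name='{voice_name}'>{text}</voice>"
        "</speak>"
    )
    return url, headers, ssml


# Example with placeholder values; POST the SSML body with these headers
# (e.g. via the requests library) to receive the synthesized audio.
url, headers, ssml = build_tts_request(
    "westeurope",
    "00000000-0000-0000-0000-000000000000",
    "MyCustomVoiceNeural",
    "Hello from a custom endpoint.",
)
```

Note that the endpoint key and exact URL for your deployment are shown on the endpoint's detail page, so values copied from there should replace the placeholders above.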
articles/ai-services/speech-service/regions.md (10 additions, 8 deletions)
@@ -34,31 +34,31 @@ The following regions are supported for Speech service features such as speech t
 | ----- | ----- | ----- |
 | Africa | South Africa North |`southafricanorth` <sup>6</sup>|
 | Asia Pacific | East Asia |`eastasia` <sup>5</sup>|
-| Asia Pacific | Southeast Asia |`southeastasia` <sup>1,2,3,4,5</sup>|
-| Asia Pacific | Australia East |`australiaeast` <sup>1,2,3,4</sup>|
+| Asia Pacific | Southeast Asia |`southeastasia` <sup>1,2,3,4,5,7</sup>|
+| Asia Pacific | Australia East |`australiaeast` <sup>1,2,3,4,7</sup>|
 | Asia Pacific | Central India |`centralindia` <sup>1,2,3,4,5</sup>|
 | Asia Pacific | Japan East |`japaneast` <sup>2,5</sup>|
 | Asia Pacific | Japan West |`japanwest`|
 | Asia Pacific | Korea Central |`koreacentral` <sup>2</sup>|
 | Canada | Canada Central |`canadacentral` <sup>1</sup>|
-| Europe | North Europe |`northeurope` <sup>1,2,4,5</sup>|
-| Europe | West Europe |`westeurope` <sup>1,2,3,4,5</sup>|
+| Europe | North Europe |`northeurope` <sup>1,2,4,5,7</sup>|
+| Europe | West Europe |`westeurope` <sup>1,2,3,4,5,7</sup>|
 | Europe | France Central |`francecentral`|
 | Europe | Germany West Central |`germanywestcentral`|
 | Europe | Norway East |`norwayeast`|
 | Europe | Switzerland North |`switzerlandnorth` <sup>6</sup>|
 | Europe | Switzerland West |`switzerlandwest`|
-| Europe | UK South |`uksouth` <sup>1,2,3,4</sup>|
+| Europe | UK South |`uksouth` <sup>1,2,3,4,7</sup>|
 | Middle East | UAE North |`uaenorth` <sup>6</sup>|
 | South America | Brazil South |`brazilsouth` <sup>6</sup>|
 | US | Central US |`centralus`|
-| US | East US |`eastus` <sup>1,2,3,4,5</sup>|
+| US | East US |`eastus` <sup>1,2,3,4,5,7</sup>|
 | US | East US 2 |`eastus2` <sup>1,2,4,5</sup>|
 | US | North Central US |`northcentralus` <sup>4,6</sup>|
-| US | South Central US |`southcentralus` <sup>1,2,3,4,5,6</sup>|
+| US | South Central US |`southcentralus` <sup>1,2,3,4,5,6,7</sup>|
 | US | West Central US |`westcentralus` <sup>5</sup>|
 | US | West US |`westus` <sup>2,5</sup>|
-| US | West US 2 |`westus2` <sup>1,2,4,5</sup>|
+| US | West US 2 |`westus2` <sup>1,2,4,5,7</sup>|
 | US | West US 3 |`westus3`|

 <sup>1</sup> The region has dedicated hardware for Custom Speech training. If you plan to train a custom model with audio data, use one of the regions with dedicated hardware for faster training. Then you can [copy the trained model](how-to-custom-speech-train-model.md#copy-a-model) to another region.
@@ -73,6 +73,8 @@ The following regions are supported for Speech service features such as speech t
 <sup>6</sup> The region does not support Speaker Recognition.

+<sup>7</sup> The region supports the [high performance](how-to-deploy-and-use-endpoint.md#add-a-deployment-endpoint) endpoint type for Custom Neural Voice.
+
 ## Intent recognition

 Available regions for intent recognition via the Speech SDK are in the following table.