
Commit e2d952a

consistent regions for STT REST examples

1 parent 63d5cac

9 files changed: +37 −37 lines

articles/cognitive-services/Speech-Service/faq-stt.yml

Lines changed: 1 addition & 1 deletion
````diff
@@ -94,7 +94,7 @@ sections:
   answer: |
     By default, requests aren't logged (neither audio nor transcription). If necessary, you can select the **Log content from this endpoint** option when you [create a custom endpoint](how-to-custom-speech-deploy-model.md#add-a-deployment-endpoint). You can also enable audio logging in the [Speech SDK](how-to-use-logging.md) on a per-request basis, without having to create a custom endpoint. In both cases, audio and recognition results of requests will be stored in secure storage. Subscriptions that use Microsoft-owned storage will be available for 30 days.

-    You can export the logged files on the deployment page in Speech Studio if you use a custom endpoint with **Log content from this endpoint** enabled. If audio logging is enabled via the SDK, call the [API](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/Endpoints_ListBaseModelLogs) to access the files. You can also use API to [delete the logs](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/Endpoints_DeleteBaseModelLogs) any time.
+    You can export the logged files on the deployment page in Speech Studio if you use a custom endpoint with **Log content from this endpoint** enabled. If audio logging is enabled via the SDK, call the [API](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/Endpoints_ListBaseModelLogs) to access the files. You can also use API to [delete the logs](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/Endpoints_DeleteBaseModelLogs) any time.

- question: |
    Are my requests throttled?
````
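The FAQ answer in this hunk links a list operation and a delete operation that must sit in the same region as the Speech resource, which is exactly the inconsistency the commit fixes. A minimal sketch of pairing the two URLs from one region value; the helper name is hypothetical and the v3.0 base-model-logs path is an assumption, so verify it against the regional reference page linked above before use:

```python
def base_model_logs_urls(region, locale):
    """Hypothetical helper: build both log-operation URLs from ONE region,
    so list and delete can never drift into different regions."""
    base = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0"
    # Assumed v3.0 path for base-model audio logs (GET to list, DELETE to remove).
    logs = f"{base}/endpoints/base/{locale}/files/logs"
    return {"list": logs, "delete": logs}
```

Requests against either URL would carry the `Ocp-Apim-Subscription-Key` header, as in the other REST examples in this commit.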

articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -43,7 +43,7 @@ When a custom model or base model expires, it is no longer available for transcr
 |Transcription route |Expired model result |Recommendation |
 |---------|---------|---------|
 |Custom endpoint|Speech recognition requests will fall back to the most recent base model for the same [locale](language-support.md?tabs=stt-tts). You will get results, but recognition might not accurately transcribe your domain data. |Update the endpoint's model as described in the [Deploy a Custom Speech model](how-to-custom-speech-deploy-model.md) guide. |
-|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [CreateTranscription](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |
+|Batch transcription |[Batch transcription](batch-transcription.md) requests for expired models will fail with a 4xx error. |In each [CreateTranscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CreateTranscription) REST API request body, set the `model` property to a base model or custom model that hasn't yet expired. Otherwise don't include the `model` property to always use the latest base model. |


 ## Get base model expiration dates
````
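The recommendation in the table above can be sketched as a request-body builder: set `model` to pin a non-expired model, or omit it to use the latest base model. The helper name is illustrative; `contentUrls`, `locale`, and `displayName` follow the v3.0 CreateTranscription schema:

```python
import json

def create_transcription_body(content_urls, locale, display_name, model_self=None):
    """Build a CreateTranscription (v3.0) request body.
    If model_self is None, the `model` property is omitted entirely,
    so the service always uses the latest base model for the locale."""
    body = {
        "contentUrls": content_urls,
        "locale": locale,
        "displayName": display_name,
    }
    if model_self is not None:
        body["model"] = {"self": model_self}
    return json.dumps(body)
```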

articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md

Lines changed: 9 additions & 9 deletions
````diff
@@ -249,13 +249,13 @@ You should receive a response body in the following format:

 ```json
 {
-  "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
+  "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae",
   "baseModel": {
-    "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
+    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/base/eb5450a7-3ca2-461a-b2d7-ddbb3ad96540"
   },
   "links": {
-    "manifest": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
-    "copyTo": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/copyto"
+    "manifest": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/manifest",
+    "copyTo": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/9df35ddb-edf9-4e91-8d1a-576d09aabdae/copyto"
   },
   "properties": {
     "deprecationDates": {
@@ -313,7 +313,7 @@ You should receive a response body in the following format:
 ```json
 {
   "project": {
-    "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
   },
 }
 ```
@@ -328,16 +328,16 @@ spx help csr model

 ::: zone pivot="rest-api"

-To connect a new model to a project of the Speech resource where the model was copied, use the [UpdateModel](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:
+To connect a new model to a project of the Speech resource where the model was copied, use the [UpdateModel](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) operation of the [Speech-to-text REST API](rest-speech-to-text.md). Construct the request body according to the following instructions:

-- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://westus2.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.
+- Set the required `project` property to the URI of an existing project. This is recommended so that you can also view and manage the model in Speech Studio. You can make a [GetProjects](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/GetProjects) request to get available projects.

 Make an HTTP PATCH request using the URI as shown in the following example. Use the URI of the new model. You can get the new model ID from the `self` property of the [CopyModelToSubscriptionToSubscription](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscriptionToSubscription) response body. Replace `YourSubscriptionKey` with your Speech resource key, replace `YourServiceRegion` with your Speech resource region, and set the request body properties as previously described.

 ```azurecli-interactive
 curl -v -X PATCH -H "Ocp-Apim-Subscription-Key: YourSubscriptionKey" -H "Content-Type: application/json" -d '{
   "project": {
-    "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
   },
 }' "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.0/models"
 ```
@@ -347,7 +347,7 @@ You should receive a response body in the following format:
 ```json
 {
   "project": {
-    "self": "https://westus2.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
+    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/projects/e6ffdefd-9517-45a9-a89c-7b5028ed0e56"
   },
 }
 ```
````
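The `curl` PATCH in the hunk above can be mirrored with the standard library. This is a hedged sketch, not an official sample: the helper name is hypothetical, and it targets a specific model URI (`.../models/{id}`), which is what "Use the URI of the new model" calls for; nothing is sent until the request is opened against a live resource:

```python
import json
import urllib.request

def build_update_model_request(region, model_id, project_self, key):
    """Construct (but do not send) the UpdateModel PATCH from the doc's curl
    example, with region, model ID, project URI, and key as parameters."""
    url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0/models/{model_id}"
    body = json.dumps({"project": {"self": project_self}}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        method="PATCH",
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
    )
    # Send with urllib.request.urlopen(req) once key and IDs are real.
```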

articles/cognitive-services/Speech-Service/includes/cognitive-services-speech-service-rest-auth.md

Lines changed: 5 additions & 5 deletions
````diff
@@ -44,7 +44,7 @@ This example is a simple HTTP request to get a token. Replace `YOUR_SUBSCRIPTION
 ```http
 POST /sts/v1.0/issueToken HTTP/1.1
 Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY
-Host: westus.api.cognitive.microsoft.com
+Host: eastus.api.cognitive.microsoft.com
 Content-type: application/x-www-form-urlencoded
 Content-Length: 0
 ```
@@ -62,7 +62,7 @@ $FetchTokenHeader = @{
   'Ocp-Apim-Subscription-Key' = 'YOUR_SUBSCRIPTION_KEY'
 }

-$OAuthToken = Invoke-RestMethod -Method POST -Uri https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken
+$OAuthToken = Invoke-RestMethod -Method POST -Uri https://eastus.api.cognitive.microsoft.com/sts/v1.0/issueToken
  -Headers $FetchTokenHeader

 # show the token received
@@ -76,7 +76,7 @@ cURL is a command-line tool available in Linux (and in the Windows Subsystem for

 ```console
 curl -v -X POST \
- "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
+ "https://eastus.api.cognitive.microsoft.com/sts/v1.0/issueToken" \
 -H "Content-type: application/x-www-form-urlencoded" \
 -H "Content-Length: 0" \
 -H "Ocp-Apim-Subscription-Key: YOUR_SUBSCRIPTION_KEY"
@@ -90,7 +90,7 @@ This C# class illustrates how to get an access token. Pass your resource key for
 public class Authentication
 {
     public static readonly string FetchTokenUri =
-      "https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken";
+      "https://eastus.api.cognitive.microsoft.com/sts/v1.0/issueToken";
     private string subscriptionKey;
     private string token;

@@ -131,7 +131,7 @@ subscription_key = 'REPLACE_WITH_YOUR_KEY'


 def get_token(subscription_key):
-    fetch_token_url = 'https://westus.api.cognitive.microsoft.com/sts/v1.0/issueToken'
+    fetch_token_url = 'https://eastus.api.cognitive.microsoft.com/sts/v1.0/issueToken'
     headers = {
         'Ocp-Apim-Subscription-Key': subscription_key
     }
````
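The five token examples in this include (raw HTTP, PowerShell, cURL, C#, Python) differ only in transport; the region is the one variable, and the key must belong to a resource in that same region. A sketch with a hypothetical helper that builds the `issueToken` request for any region, so the hostname cannot drift out of sync with the key:

```python
import urllib.request

def issue_token_request(region, subscription_key):
    """Build the issueToken POST (empty body) for a given region.
    The subscription key must come from a Speech resource in that region."""
    url = f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"
    return urllib.request.Request(
        url,
        data=b"",  # Content-Length: 0, as in the raw HTTP example above
        method="POST",
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-type": "application/x-www-form-urlencoded",
        },
    )
```

Opening the request with `urllib.request.urlopen` against a live resource returns the access token in the response body.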

articles/cognitive-services/Speech-Service/migrate-v2-to-v3.md

Lines changed: 15 additions & 15 deletions
````diff
@@ -70,7 +70,7 @@ General changes:

 ### Host name changes

-Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech-to-text REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths.
+Endpoint host names have changed from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com`. Paths to the new endpoints no longer contain `api/` because it's part of the hostname. The [Speech-to-text REST API v3.0](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0) reference documentation lists valid regions and paths.
 >[!IMPORTANT]
 >Change the hostname from `{region}.cris.ai` to `{region}.api.cognitive.microsoft.com` where region is the region of your speech subscription. Also remove `api/`from any path in your client code.

@@ -99,14 +99,14 @@ If the entity has additional functionality available through other paths, they a

 ```json
 {
-  "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+  "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
   "createdDateTime": "2019-01-07T11:34:12Z",
   "lastActionDateTime": "2019-01-07T11:36:07Z",
   "status": "Succeeded",
   "locale": "en-US",
   "displayName": "Transcription using locale en-US",
   "links": {
-    "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
+    "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
   }
 }
 ```
@@ -277,9 +277,9 @@ to access the content of each file. To control the validity duration of the SAS

 ```json
 {
-  "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+  "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
   "links": {
-    "files": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
+    "files": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files"
   }
 }
 ```
@@ -290,7 +290,7 @@ to access the content of each file. To control the validity duration of the SAS
 {
   "values": [
     {
-      "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/f23e54f5-ed74-4c31-9730-2f1a3ef83ce8",
+      "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/f23e54f5-ed74-4c31-9730-2f1a3ef83ce8",
       "name": "Name",
       "kind": "Transcription",
       "properties": {
@@ -302,7 +302,7 @@ to access the content of each file. To control the validity duration of the SAS
       }
     },
     {
-      "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/28bc946b-c251-4a86-84f6-ea0f0a2373ef",
+      "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files/28bc946b-c251-4a86-84f6-ea0f0a2373ef",
       "name": "Name",
       "kind": "TranscriptionReport",
       "properties": {
@@ -314,7 +314,7 @@ to access the content of each file. To control the validity duration of the SAS
       }
     }
   ],
-  "@nextLink": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files?skip=2&top=2"
+  "@nextLink": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3/files?skip=2&top=2"
 }
 ```

@@ -406,9 +406,9 @@ In v2, referenced entities were always inlined, for example the used models of a

 ```json
 {
-  "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
+  "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/transcriptions/9891c965-bb32-4880-b14b-6d44efb158f3",
   "model": {
-    "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/models/021a72d0-54c4-43d3-8254-27336ead9037"
+    "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/models/021a72d0-54c4-43d3-8254-27336ead9037"
   }
 }
 ```
@@ -427,9 +427,9 @@ Version v2 of the service supported logging endpoint results. To retrieve the re

 ```json
 {
-  "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6",
+  "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6",
   "links": {
-    "logs": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs"
+    "logs": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs"
   }
 }
 ```
@@ -440,7 +440,7 @@ Version v2 of the service supported logging endpoint results. To retrieve the re
 {
   "values": [
     {
-      "self": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/6d72ad7e-f286-4a6f-b81b-a0532ca6bcaa/files/logs/2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
+      "self": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/6d72ad7e-f286-4a6f-b81b-a0532ca6bcaa/files/logs/2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
       "name": "2019-09-20_080000_3b5f4628-e225-439d-bd27-8804f9eed13f.wav",
       "kind": "Audio",
       "properties": {
@@ -452,7 +452,7 @@ Version v2 of the service supported logging endpoint results. To retrieve the re
       }
     }
   ],
-  "@nextLink": "https://westus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs?top=2&SkipToken=2!188!MDAwMDk1ITZhMjhiMDllLTg0MDYtNDViMi1hMGRkLWFlNzRlOGRhZWJkNi8yMDIwLTA0LTAxLzEyNDY0M182MzI5NGRkMi1mZGYzLTRhZmEtOTA0NC1mODU5ZTcxOWJiYzYud2F2ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--"
+  "@nextLink": "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.0/endpoints/afa0669c-a01e-4693-ae3a-93baf40f26d6/files/logs?top=2&SkipToken=2!188!MDAwMDk1ITZhMjhiMDllLTg0MDYtNDViMi1hMGRkLWFlNzRlOGRhZWJkNi8yMDIwLTA0LTAxLzEyNDY0M182MzI5NGRkMi1mZGYzLTRhZmEtOTA0NC1mODU5ZTcxOWJiYzYud2F2ITAwMDAyOCE5OTk5LTEyLTMxVDIzOjU5OjU5Ljk5OTk5OTlaIQ--"
 }
 ```

@@ -519,4 +519,4 @@ Accuracy tests have been renamed to evaluations because the new name describes b
 ## Next steps

 * [Speech-to-text REST API](rest-speech-to-text.md)
-* [Speech-to-text REST API v3.0 reference](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
+* [Speech-to-text REST API v3.0 reference](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0)
````
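The host-name-changes rule in the migration guide above (swap `{region}.cris.ai` for `{region}.api.cognitive.microsoft.com` and remove `api/` from the path) is mechanical enough to sketch. `migrate_v2_host` is a hypothetical helper; note it only rewrites the host and path prefix, and does not attempt the separate v2-to-v3 path and schema changes the guide also covers:

```python
from urllib.parse import urlparse, urlunparse

def migrate_v2_host(url):
    """Rewrite a v2 {region}.cris.ai URL to the new host, dropping the
    leading api/ path segment, per the host name changes section."""
    parts = urlparse(url)
    host = parts.netloc.replace(".cris.ai", ".api.cognitive.microsoft.com")
    path = parts.path.replace("/api/", "/", 1)  # api/ is now part of the hostname
    return urlunparse(parts._replace(netloc=host, path=path))

print(migrate_v2_host("https://westus.cris.ai/api/speechtotext/v2.0/transcriptions"))
# → https://westus.api.cognitive.microsoft.com/speechtotext/v2.0/transcriptions
```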

articles/cognitive-services/Speech-Service/rest-speech-to-text.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -22,10 +22,10 @@ Speech-to-text REST API is used for [Batch transcription](batch-transcription.md
 > Speech-to-text REST API v3.1 is currently in public preview. Once it's generally available, version 3.0 of the [Speech to Text REST API](rest-speech-to-text.md) will be deprecated. For more information, see the [Migrate code from v3.0 to v3.1 of the REST API](migrate-v3-0-to-v3-1.md) guide.

 > [!div class="nextstepaction"]
-> [See the Speech to Text API v3.1 preview reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/)
+> [See the Speech to Text API v3.1 preview reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/)

 > [!div class="nextstepaction"]
-> [See the Speech to Text API v3.0 reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/)
+> [See the Speech to Text API v3.0 reference documentation](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/)

 Use Speech-to-text REST API to:

````
articles/cognitive-services/Speech-Service/speech-container-howto-on-premises.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -26,7 +26,7 @@ The following prerequisites before using Speech containers on-premises:
2626
| Container Registry access | In order for Kubernetes to pull the docker images into the cluster, it will need access to the container registry. |
2727
| Kubernetes CLI | The [Kubernetes CLI][kubernetes-cli] is required for managing the shared credentials from the container registry. Kubernetes is also needed before Helm, which is the Kubernetes package manager. |
2828
| Helm CLI | Install the [Helm CLI][helm-install], which is used to to install a helm chart (container package definition). |
29-
|Speech resource |In order to use these containers, you must have:<br><br>A _Speech_ Azure resource to get the associated billing key and billing endpoint URI. Both values are available on the Azure portal's **Speech** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: resource key<br><br>**{ENDPOINT_URI}**: endpoint URI example is: `https://westus.api.cognitive.microsoft.com/sts/v1.0`|
29+
|Speech resource |In order to use these containers, you must have:<br><br>A _Speech_ Azure resource to get the associated billing key and billing endpoint URI. Both values are available on the Azure portal's **Speech** Overview and Keys pages and are required to start the container.<br><br>**{API_KEY}**: resource key<br><br>**{ENDPOINT_URI}**: endpoint URI example is: `https://eastus.api.cognitive.microsoft.com/sts/v1.0`|
3030

3131
## The recommended host computer configuration
3232

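The `{ENDPOINT_URI}` in the prerequisites table above is derived from the Speech resource's region, and it must match the region of the `{API_KEY}`'s resource for container billing to work. A one-line sketch (hypothetical helper name) that keeps the two in sync:

```python
def billing_endpoint_uri(region):
    """Build the container billing ENDPOINT_URI for a Speech resource's region,
    matching the example format in the prerequisites table."""
    return f"https://{region}.api.cognitive.microsoft.com/sts/v1.0"
```

The returned value and the resource key would then be passed to the container at startup; consult the container how-to for the exact run parameters.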