
Commit 19aa145

Merge pull request #97796 from wolfma61/concurrent
change support channel for concurrency increase requests
2 parents: 3f6f169 + f5dd2b7

1 file changed (+23, -21): articles/cognitive-services/Speech-Service

articles/cognitive-services/Speech-Service/faq-stt.md

Lines changed: 23 additions & 21 deletions
@@ -8,7 +8,7 @@ manager: nitinme
 ms.service: cognitive-services
 ms.subservice: speech-service
 ms.topic: conceptual
-ms.date: 10/17/2019
+ms.date: 12/4/2019
 ms.author: panosper
 ---

@@ -60,11 +60,32 @@ The old dataset and the new dataset must be combined in a single .zip file (for
 If you have adapted and deployed a model with baseline V1.0, that deployment will remain as is. Customers can decommission the deployed model, readapt using the newer version of the baseline, and redeploy.
 
+**Q: Can I download my model and run it locally?**
+
+**A**: Models can't be downloaded and executed locally.
+
+**Q: Are my requests logged?**
+
+**A**: When you create a deployment, you can choose to switch off tracing. At that point, no audio or transcriptions are logged. Otherwise, requests are typically logged in Azure in secure storage.
+
+**Q: Are my requests throttled?**
+
+**A**: The REST API limits requests to 25 per 5 seconds. For details, see our pages for [Speech to text](speech-to-text.md).
+
+**Q: How am I charged for dual-channel audio?**
+
+**A**: If you submit each channel separately (each channel in its own file), you're charged for the duration of each file. If you submit a single file with the channels multiplexed together, you're charged for the duration of that single file.
+
+> [!IMPORTANT]
+> If you have further privacy concerns that prohibit you from using the Custom Speech service, contact one of the support channels.
+
+## Increasing concurrency
+
 **Q: What if I need higher concurrency for my deployed model than what is offered in the portal?**
 
 **A**: You can scale up your model in increments of 20 concurrent requests.
 
-Contact [Speech support](mailto:speechsupport@microsoft.com?subject=Request%20for%20higher%20concurrency%20for%20Speech-to-text) if you require a higher scale.
+With the required information, create a support request in the [Azure support portal](https://ms.portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). Do not post the information on any of the public channels (GitHub, Stack Overflow, ...) mentioned on the [support page](support.md).
 
 To increase concurrency for a ***custom model***, we need the following information:

@@ -93,25 +114,6 @@ or
 - display the `Properties` for this service,
 - copy the complete `Resource ID`.
 
-**Q: Can I download my model and run it locally?**
-
-**A**: Models can't be downloaded and executed locally.
-
-**Q: Are my requests logged?**
-
-**A**: When you create a deployment, you can choose to switch off tracing. At that point, no audio or transcriptions are logged. Otherwise, requests are typically logged in Azure in secure storage.
-
-**Q: Are my requests throttled?**
-
-**A**: The REST API limits requests to 25 per 5 seconds. For details, see our pages for [Speech to text](speech-to-text.md).
-
-**Q: How am I charged for dual-channel audio?**
-
-**A**: If you submit each channel separately (each channel in its own file), you're charged for the duration of each file. If you submit a single file with the channels multiplexed together, you're charged for the duration of that single file.
-
-> [!IMPORTANT]
-> If you have further privacy concerns that prohibit you from using the Custom Speech service, contact one of the support channels.
-
 ## Importing data
 
 **Q: What is the limit on the size of a dataset, and why is it the limit?**
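The throttling answer in this FAQ states that the REST API limits requests to 25 per 5 seconds. A caller can stay under that quota with a small client-side sliding-window limiter; the sketch below is illustrative only (it is not part of any Speech SDK), with the limit and window constants taken from the FAQ text.

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Client-side limiter for a documented REST quota:
    at most `limit` requests in any `window`-second span."""

    def __init__(self, limit=25, window=5.0):
        self.limit = limit
        self.window = window
        self.calls = deque()  # timestamps of recent requests

    def acquire(self):
        """Block until a request may be sent, then record it."""
        now = time.monotonic()
        # Drop timestamps that have fallen out of the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) >= self.limit:
            # Sleep until the oldest recorded call leaves the window.
            time.sleep(self.window - (now - self.calls[0]))
            return self.acquire()
        self.calls.append(time.monotonic())
```

Calling `limiter.acquire()` immediately before each REST request keeps the client within the published rate; requests beyond the window's capacity simply wait instead of being rejected by the service.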

0 commit comments
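The concurrency answer says deployed models scale up in increments of 20 concurrent requests. A quick way to work out how many increments a target concurrency needs is a ceiling division; this helper is a hypothetical illustration, not an Azure API.

```python
import math

def scale_up_plan(current, required, increment=20):
    """Return (steps, resulting_concurrency) when scaling from
    `current` toward `required` in fixed-size increments, per the
    FAQ's 'increments of 20 concurrent requests'. Illustrative only."""
    steps = max(0, math.ceil((required - current) / increment))
    return steps, current + steps * increment
```

For example, going from the default 20 concurrent requests to a requirement of 70 takes 3 increments, landing at 80.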

Comments
 (0)
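The dual-channel billing answer can be made concrete with a little arithmetic. This sketch assumes, as an interpretation of the FAQ, that a multiplexed file's duration equals its longest channel (channels play concurrently); the function name and shape are hypothetical, not a billing API.

```python
def billed_seconds(channel_durations, multiplexed):
    """Billing model described in the FAQ (illustrative only):
    - separate files per channel: charged for each file's duration,
      i.e. the sum across channels;
    - one multiplexed file: charged for that single file's duration,
      assumed here to be the longest channel."""
    if multiplexed:
        return max(channel_durations)
    return sum(channel_durations)
```

Two 60-second channels submitted as separate files are billed as 120 seconds of audio, but as 60 seconds when submitted as one multiplexed file.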