Commit 1ecb245

Merge pull request #6504 from eric-urban/eur/batch-transcription
bulk submissions and polling considerations
2 parents 53fd19c + cf8af89 commit 1ecb245

File tree

1 file changed (+9 −1 lines changed)

articles/ai-services/speech-service/batch-transcription-create.md

Lines changed: 9 additions & 1 deletion
@@ -7,7 +7,7 @@ author: eric-urban
 ms.author: eur
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 6/5/2025
+ms.date: 8/11/2025
 zone_pivot_groups: speech-cli-rest
 ms.custom: devx-track-csharp
 # Customer intent: As a user who implements audio transcription, I want to create transcriptions in bulk so that I don't have to submit audio content repeatedly.
@@ -372,6 +372,14 @@ You can store the results of a batch transcription to a writable Azure Blob stor
 
 If you want to store the transcription results in an Azure Blob storage container by using the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). For more information, see [Use the Bring your own storage (BYOS) Azure AI Foundry resource for speech to text](bring-your-own-storage-speech-resource-speech-to-text.md).
 
+## Bulk submissions and polling
+
+Batch transcription is asynchronous, and requests are processed one at a time in each region. Submitting jobs at a higher rate does not speed up processing. For example, sending 600 or 6,000 requests per minute has no effect on throughput.
+
+[When monitoring job status](./batch-transcription-get.md), polling every few seconds is unnecessary. If you submit multiple jobs, only the first job will be processed initially; subsequent jobs will wait until the first job completes. Polling all jobs frequently increases system load without benefit. Checking status every ten minutes is sufficient, and polling more often than once per minute is not recommended.
+
+To optimize throughput for large-scale batch transcription, consider distributing your jobs across multiple supported Azure regions. This approach can help balance load and reduce overall processing time, provided your data and compliance requirements allow for multi-region usage. Review [region availability](./regions.md) and ensure your storage and resources are accessible from each region you plan to use.
+
 ## Related content
 
 - [Learn more about batch transcription](batch-transcription.md)
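
To illustrate the polling guidance added above, here's a minimal sketch of a status-check loop that waits ten minutes between requests. It assumes the Speech to text REST API v3.2 `GET /transcriptions/{id}` endpoint and uses placeholder values (`eastus`, `YourSpeechResourceKey`, `YourTranscriptionId`) that you'd replace with your own; it isn't part of the article's documented examples.

```python
import time

import requests

# All values below are illustrative placeholders (assumed), not documented defaults.
REGION = "eastus"                      # Region of your Speech resource
KEY = "YourSpeechResourceKey"          # Speech resource key
TRANSCRIPTION_ID = "YourTranscriptionId"

# Assumed Speech to text REST API (v3.2) status endpoint for a single transcription job.
STATUS_URL = (
    f"https://{REGION}.api.cognitive.microsoft.com"
    f"/speechtotext/v3.2/transcriptions/{TRANSCRIPTION_ID}"
)

POLL_INTERVAL_SECONDS = 600  # Check roughly every ten minutes, per the guidance above.

while True:
    response = requests.get(STATUS_URL, headers={"Ocp-Apim-Subscription-Key": KEY})
    response.raise_for_status()
    status = response.json().get("status")
    print(f"Transcription status: {status}")
    if status in ("Succeeded", "Failed"):
        break  # Terminal status reached; stop polling.
    time.sleep(POLL_INTERVAL_SECONDS)
```

The loop exits on a terminal status (`Succeeded` or `Failed`); any other status is simply re-checked on the next pass, which keeps the request rate well below the once-per-minute ceiling recommended above.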
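
As a rough sketch of the multi-region approach, the following round-robin loop submits each job to the next region in a fixed list. The region names, keys, audio URLs, and request body fields are assumptions for illustration; each region requires its own Speech resource, and you'd adapt the body to match how you actually create transcriptions.

```python
import itertools

import requests

# Illustrative placeholders (assumed): each region needs its own Speech resource and key.
REGION_KEYS = {
    "eastus": "YourEastUsSpeechKey",
    "westeurope": "YourWestEuropeSpeechKey",
}
AUDIO_URLS = [
    "https://yourstorage.blob.core.windows.net/audio/file1.wav",
    "https://yourstorage.blob.core.windows.net/audio/file2.wav",
]

# Round-robin the regions so submissions are spread evenly across them.
region_cycle = itertools.cycle(REGION_KEYS.items())

for audio_url, (region, key) in zip(AUDIO_URLS, region_cycle):
    create_url = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.2/transcriptions"
    body = {
        "displayName": "Bulk submission example",
        "locale": "en-US",
        "contentUrls": [audio_url],
    }
    response = requests.post(
        create_url,
        headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
        json=body,
    )
    response.raise_for_status()
    # The response includes a self URL that identifies the new transcription job.
    print(f"Submitted {audio_url} to {region}: {response.json()['self']}")
```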
