With Custom Speech, you can evaluate and improve the accuracy of speech recognition for your applications and products. A custom speech model can be used for [real-time speech to text](speech-to-text.md), [speech translation](speech-translation.md), and [batch transcription](batch-transcription.md).
Out of the box, speech recognition utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. The base model is pre-trained with dialects and phonetics representing various common domains. When you make a speech recognition request, the most recent base model for each [supported language](language-support.md?tabs=stt) is used by default. The base model works well in most speech recognition scenarios.
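For instance, with the Speech SDK you never reference the base model directly: you choose a locale, and the service applies the latest base model for it. Here's a minimal Java sketch; the key, region, and locale values are placeholders to replace with your own.

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.SpeechRecognitionResult;
import com.microsoft.cognitiveservices.speech.SpeechRecognizer;

public class BaseModelRecognition {
    public static void main(String[] args) throws Exception {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // No endpoint ID is set, so recognition uses the most recent base
        // model for the requested language. After you deploy a custom model,
        // config.setEndpointId("...") routes recognition to it instead.
        config.setSpeechRecognitionLanguage("en-US");

        try (SpeechRecognizer recognizer = new SpeechRecognizer(config)) {
            SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get();
            System.out.println("RECOGNIZED: " + result.getText());
        }
    }
}
```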
You can use a custom model to augment the base model to improve recognition of vocabulary that's specific to your application, by providing text data to train the model. You can also use it to improve recognition for your application's specific audio conditions, by providing audio data with reference transcriptions.
Here's more information about the sequence of steps shown in the previous diagram (a minimal sketch of the corresponding REST calls follows the list):
1. [Create a project](how-to-custom-speech-create-project.md) and choose a model. Use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a> that you create in the Azure portal. If you'll train a custom model with audio data, choose a Speech resource region with dedicated hardware for training audio data. See footnotes in the [regions](regions.md#speech-service) table for more information.
1. [Upload test data](./how-to-custom-speech-upload-data.md). Upload test data to evaluate the speech to text offering for your applications, tools, and products.
1. [Test recognition quality](how-to-custom-speech-inspect-data.md). Use the [Speech Studio](https://aka.ms/speechstudio/customspeech) to play back uploaded audio and inspect the speech recognition quality of your test data.
1. [Test model quantitatively](how-to-custom-speech-evaluate-data.md). Evaluate and improve the accuracy of the speech to text model. The Speech service provides a quantitative word error rate (WER): the number of substitution, insertion, and deletion errors divided by the number of words in the reference transcript. You can use the WER to determine if more training is required.
1. [Train a model](how-to-custom-speech-train-model.md). Provide written transcripts and related text, along with the corresponding audio data. Testing a model before and after training is optional but recommended.
> [!NOTE]
> You pay for Custom Speech model usage and endpoint hosting, but you are not charged for training a model.
1. [Deploy a model](how-to-custom-speech-deploy-model.md). Once you're satisfied with the test results, deploy the model to a custom endpoint. Except for [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
> [!TIP]
> A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the custom speech model is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
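Speech Studio drives the steps above, but the same workflow is exposed through the Speech to text REST API, which you can call directly. The Java sketch below assumes the v3.1 REST API with placeholder key and region values; the JSON bodies are approximations to check against the REST reference, not an exact recipe.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CustomSpeechWorkflow {
    static final String KEY = "YourSubscriptionKey";
    static final String BASE = "https://YourServiceRegion.api.cognitive.microsoft.com/speechtotext/v3.1";

    // Helper that POSTs a JSON body to one of the Custom Speech collections.
    static String post(String path, String json) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(BASE + path))
            .header("Ocp-Apim-Subscription-Key", KEY)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(json))
            .build();
        return HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString())
            .body();
    }

    public static void main(String[] args) throws Exception {
        // Step 1: create a project that groups datasets, models, tests, and endpoints.
        System.out.println(post("/projects",
            "{\"displayName\":\"My project\",\"locale\":\"en-US\"}"));

        // The later steps follow the same pattern: POST /datasets to upload
        // data by URL, POST /evaluations to measure WER, POST /models to
        // train, and POST /endpoints to deploy.
    }
}
```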
## Responsible AI
An AI system includes not only the technology, but also the people who use it, the people who will be affected by it, and the environment in which it's deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems.
* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/ai-services/speech-service/context/context)
---
description: In this quickstart, you convert speech to text continuously from a file. The service transcribes the speech and identifies one or more speakers.
---
This article demonstrates asynchronous meeting transcription using the **RemoteMeetingTranscriptionClient** API. If you've configured meeting transcription to do asynchronous transcription and have a `meetingId`, you can use **RemoteMeetingTranscriptionClient** to obtain the transcription associated with that `meetingId`.
## Asynchronous vs. real-time + asynchronous
With asynchronous transcription, you stream the meeting audio, but don't need a transcription returned in real time. Instead, after the audio is sent, use the `meetingId` of `Meeting` to query for the status of the asynchronous transcription. When the asynchronous transcription is ready, you'll get a `RemoteMeetingTranscriptionResult`.
With real-time plus asynchronous, you get the transcription in real time, and you also get the transcription afterward by querying with the `meetingId` (similar to the asynchronous scenario).
Two steps are required to accomplish asynchronous transcription. The first step is to upload the audio, choosing either asynchronous only or real-time plus asynchronous. The second step is to get the transcription results, as in the sketch that follows.
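As a sketch of the second step, the polling flow below uses the class names this article mentions (`RemoteMeetingTranscriptionClient`, `RemoteMeetingTranscriptionResult`) and mirrors the Java Speech SDK's remote conversation transcription client; treat the package name, the operation type, and the method names as assumptions to verify against the SDK reference.

```java
import com.microsoft.cognitiveservices.speech.SpeechConfig;
// The package and operation class below are assumptions modeled on the
// remote conversation transcription API; check the SDK reference.
import com.microsoft.cognitiveservices.speech.remotemeeting.RemoteMeetingTranscriptionClient;
import com.microsoft.cognitiveservices.speech.remotemeeting.RemoteMeetingTranscriptionOperation;
import com.microsoft.cognitiveservices.speech.remotemeeting.RemoteMeetingTranscriptionResult;
import com.azure.core.util.polling.PollerFlux;
import com.azure.core.util.polling.SyncPoller;

public class GetAsyncTranscription {
    public static void main(String[] args) {
        SpeechConfig config = SpeechConfig.fromSubscription("YourSubscriptionKey", "YourServiceRegion");

        // The meetingId captured when the meeting audio was uploaded.
        String meetingId = "YourMeetingId";

        RemoteMeetingTranscriptionClient client = new RemoteMeetingTranscriptionClient(config);

        // Start the long-running query and block until the asynchronous
        // transcription is ready.
        PollerFlux<RemoteMeetingTranscriptionOperation, RemoteMeetingTranscriptionResult> poller =
            client.getTranscriptionOperation(meetingId);
        SyncPoller<RemoteMeetingTranscriptionOperation, RemoteMeetingTranscriptionResult> blocking =
            poller.getSyncPoller();
        blocking.waitForCompletion();

        // Inspect the final result for the transcribed phrases; see the SDK
        // reference for its accessors.
        RemoteMeetingTranscriptionResult result = blocking.getFinalResult();
        System.out.println(result);
    }
}
```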
---
title: Real-time meeting transcription quickstart - Speech service
titleSuffix: Azure AI services
description: In this quickstart, learn how to transcribe meetings. You can add, remove, and identify multiple participants by streaming audio to the Speech service.
---
You can transcribe meetings with the ability to add, remove, and identify multiple participants by streaming audio to the Speech service. You first create voice signatures for each participant using the REST API, and then use the voice signatures with the Speech SDK to transcribe meetings. See the Meeting Transcription [overview](meeting-transcription.md) for more information.
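To give a feel for the first step, here's a rough Java sketch of requesting a voice signature for one participant. The endpoint URL and request format are assumptions modeled on the signature service used for conversation transcription; verify both against the REST reference, which may require the audio as multipart form data rather than raw bytes.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class CreateVoiceSignature {
    public static void main(String[] args) throws Exception {
        // Use one of the supported regions listed in the limitations below.
        String region = "centralus";
        // Assumed endpoint shape; substitute the URL from the REST reference.
        String url = "https://signature." + region
            + ".cognitiveservices.azure.com/api/v1/Signature/GenerateVoiceSignatureFromFormData";

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(url))
            .header("Ocp-Apim-Subscription-Key", "YourSubscriptionKey")
            // A short WAV sample of the participant speaking alone.
            .POST(HttpRequest.BodyPublishers.ofFile(Path.of("participant1.wav")))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // The returned JSON is the voice signature you pass to the Speech SDK
        // when adding this participant to the meeting.
        System.out.println(response.body());
    }
}
```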
## Limitations
* Only available in the following subscription regions: `centralus`, `eastasia`, `eastus`, `westeurope`
* Requires a 7-mic circular multi-microphone array. The microphone array should meet [our specification](./speech-sdk-microphone.md).
> [!NOTE]
> The Speech SDK for C++, Java, Objective-C, and Swift supports Meeting Transcription, but we haven't yet included a guide here.