Commit 2a6e078

resolve blocking issues
1 parent db8ed6a commit 2a6e078

File tree

12 files changed: +30 −123 lines


articles/ai-services/speech-service/conversation-transcription.md

Lines changed: 0 additions & 93 deletions
This file was deleted.

articles/ai-services/speech-service/devices-sdk-release-notes.md

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ The following sections list changes in the most recent releases.

 ## Speech Devices SDK 1.5.1:

-- Include [Conversation Transcription](./conversation-transcription.md) in the sample app.
+- Include conversation transcription in the sample app.
 - Updated the [Speech SDK](./speech-sdk.md) component to version 1.5.1. For more information, see its [release notes](./releasenotes.md).

 ## Speech Devices SDK 1.5.0: 2019-May release

articles/ai-services/speech-service/how-to-async-meeting-transcription.md

Lines changed: 4 additions & 4 deletions
@@ -1,7 +1,7 @@
 ---
-title: Asynchronous Meeting Transcription - Speech service
+title: Asynchronous meeting transcription - Speech service
 titleSuffix: Azure AI services
-description: Learn how to use asynchronous Meeting Transcription using the Speech service. Available for Java and C# only.
+description: Learn how to use asynchronous meeting transcription using the Speech service. Available for Java and C# only.
 services: cognitive-services
 manager: nitinme
 ms.service: cognitive-services
@@ -13,9 +13,9 @@ ms.custom: cogserv-non-critical-speech, devx-track-csharp, devx-track-extended-j
 zone_pivot_groups: programming-languages-set-twenty-one
 ---

-# Asynchronous Meeting Transcription
+# Asynchronous meeting transcription

-In this article, asynchronous Meeting Transcription is demonstrated using the **RemoteMeetingTranscriptionClient** API. If you have configured Meeting Transcription to do asynchronous transcription and have a `meetingId`, you can obtain the transcription associated with that `meetingId` using the **RemoteMeetingTranscriptionClient** API.
+In this article, asynchronous meeting transcription is demonstrated using the **RemoteMeetingTranscriptionClient** API. If you have configured meeting transcription to do asynchronous transcription and have a `meetingId`, you can obtain the transcription associated with that `meetingId` using the **RemoteMeetingTranscriptionClient** API.

 ## Asynchronous vs. real-time + asynchronous

articles/ai-services/speech-service/how-to-use-meeting-transcription.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ ms.date: 05/06/2023
 ms.author: eur
 zone_pivot_groups: acs-js-csharp-python
 ms.devlang: csharp, javascript
-ms.custom: cogserv-non-critical-speech, ignite-fall-2021
+ms.custom: cogserv-non-critical-speech, ignite-fall-2021, references_regions
 ---

 # Quickstart: Real-time meeting transcription

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/cpp.md

Lines changed: 2 additions & 2 deletions
@@ -135,8 +135,8 @@ Follow these steps to create a new console application and install the Speech SD
    ```

 1. Replace `katiesteve.wav` with the filepath and filename of your `.wav` file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the [sample audio file](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/csharp/dotnet/conversation-transcription/helloworld/katiesteve.wav) provided in the Speech SDK samples repository on GitHub.
-   > [!NOTE]
-   > The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
+   > [!NOTE]
+   > The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).
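The 7-second guidance in the note above can be checked locally before sending audio to the service. A minimal, stdlib-only Python sketch (the helper names are mine, not part of the Speech SDK):

```python
import wave

# From the note above: the service performs best with at least 7 seconds
# of continuous audio per speaker; shorter runs may yield Speaker ID "Unknown".
MIN_CONTINUOUS_SPEAKER_AUDIO_SECONDS = 7.0

def wav_duration_seconds(path):
    """Return the total duration of a .wav file in seconds."""
    with wave.open(path, "rb") as wf:
        return wf.getnframes() / wf.getframerate()

def long_enough_for_diarization(path):
    """Rough sanity check: the whole file must at least exceed the
    per-speaker minimum (a necessary, not sufficient, condition)."""
    return wav_duration_seconds(path) >= MIN_CONTINUOUS_SPEAKER_AUDIO_SECONDS
```

This only bounds the total duration; whether each individual speaker has 7 continuous seconds still depends on the recording itself.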

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/csharp.md

Lines changed: 2 additions & 2 deletions
@@ -111,8 +111,8 @@ Follow these steps to create a new console application and install the Speech SD
    ```

 1. Replace `katiesteve.wav` with the filepath and filename of your `.wav` file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the [sample audio file](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/csharp/dotnet/conversation-transcription/helloworld/katiesteve.wav) provided in the Speech SDK samples repository on GitHub.
-   > [!NOTE]
-   > The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
+   > [!NOTE]
+   > The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).

 Run your new console application to start conversation transcription:

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/java.md

Lines changed: 2 additions & 2 deletions
@@ -140,8 +140,8 @@ Follow these steps to create a new console application for conversation transcri
    ```

 1. Replace `katiesteve.wav` with the filepath and filename of your `.wav` file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the [sample audio file](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/csharp/dotnet/conversation-transcription/helloworld/katiesteve.wav) provided in the Speech SDK samples repository on GitHub.
-   > [!NOTE]
-   > The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
+   > [!NOTE]
+   > The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).

 Run your new console application to start conversation transcription:

articles/ai-services/speech-service/includes/quickstarts/stt-diarization/python.md

Lines changed: 2 additions & 2 deletions
@@ -100,8 +100,8 @@ Follow these steps to create a new console application.
    ```

 1. Replace `katiesteve.wav` with the filepath and filename of your `.wav` file. The intent of this quickstart is to recognize speech from multiple participants in the conversation. Your audio file should contain multiple speakers. For example, you can use the [sample audio file](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/csharp/dotnet/conversation-transcription/helloworld/katiesteve.wav) provided in the Speech SDK samples repository on GitHub.
-   > [!NOTE]
-   > The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
+   > [!NOTE]
+   > The service performs best with at least 7 seconds of continuous audio from a single speaker. This allows the system to differentiate the speakers properly. Otherwise the Speaker ID is returned as `Unknown`.
 1. To change the speech recognition language, replace `en-US` with another [supported language](~/articles/cognitive-services/speech-service/supported-languages.md). For example, `es-ES` for Spanish (Spain). The default language is `en-US` if you don't specify a language. For details about how to identify one of multiple languages that might be spoken, see [language identification](~/articles/cognitive-services/speech-service/language-identification.md).

 Run your new console application to start conversation transcription:
