
Commit 81f3d21

Merge pull request #3335 from eric-urban/eur/conversation-transcription
retire conversation transcription multichannel diarization
2 parents: 59096ba + 2e56a28

15 files changed (+35, −526 lines)

articles/ai-services/.openpublishing.redirection.ai-services.json

Lines changed: 15 additions & 2 deletions
@@ -470,8 +470,6 @@
       "redirect_url": "/azure/ai-services/speech-service/how-to-custom-speech-create-project",
       "redirect_document_id": false
     },
-
-
     {
       "source_path_from_root": "/articles/ai-services/anomaly-detector/how-to/postman.md",
       "redirect_url": "/azure/ai-services/anomaly-detector/overview",
@@ -1066,6 +1064,21 @@
       "source_path_from_root": "/articles/ai-services/translator/sovereign-clouds.md",
       "redirect_url": "/azure/ai-services/translator/reference/sovereign-clouds",
       "redirect_document_id": true
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/meeting-transcription.md",
+      "redirect_url": "/azure/ai-services/speech-service/multi-device-conversation",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-use-meeting-transcription.md",
+      "redirect_url": "/azure/ai-services/speech-service/multi-device-conversation",
+      "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/ai-services/speech-service/how-to-async-meeting-transcription.md",
+      "redirect_url": "/azure/ai-services/speech-service/multi-device-conversation",
+      "redirect_document_id": false
     }
   ]
 }

articles/ai-services/speech-service/how-to-async-meeting-transcription.md

Lines changed: 0 additions & 44 deletions
This file was deleted.

articles/ai-services/speech-service/how-to-use-meeting-transcription.md

Lines changed: 0 additions & 48 deletions
This file was deleted.

articles/ai-services/speech-service/includes/how-to/meeting-transcription/real-time-csharp.md

Lines changed: 1 addition & 1 deletion
@@ -104,7 +104,7 @@ This sample code does the following:
 * Creates a `MeetingTranscriber` using the constructor, and subscribes to the necessary events.
 * Adds participants to the meeting. The strings `voiceSignatureStringUser1` and `voiceSignatureStringUser2` should come as output from the steps above from the function `GetVoiceSignatureString()`.
 * Joins the meeting and begins transcription.
-* If you want to differentiate speakers without providing voice samples, enable the `DifferentiateGuestSpeakers` feature as in [Meeting Transcription Overview](../../../meeting-transcription.md).
+* If you want to differentiate speakers without providing voice samples, enable the `DifferentiateGuestSpeakers` feature.
 
 > [!NOTE]
 > `AudioStreamReader` is a helper class you can get on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/quickstart/csharp/dotnet/meeting-transcription/helloworld/AudioStreamReader.cs).

articles/ai-services/speech-service/includes/how-to/meeting-transcription/real-time-javascript.md

Lines changed: 1 addition & 1 deletion
@@ -72,7 +72,7 @@ This sample code does the following:
 * Creates a `MeetingTranscriber` using the constructor.
 * Adds participants to the meeting. The strings `voiceSignatureStringUser1` and `voiceSignatureStringUser2` should come as output from the steps above.
 * Registers to events and begins transcription.
-* If you want to differentiate speakers without providing voice samples, enable `DifferentiateGuestSpeakers` feature as in [Meeting Transcription Overview](../../../meeting-transcription.md).
+* If you want to differentiate speakers without providing voice samples, enable `DifferentiateGuestSpeakers` feature.
 
 If speaker identification or differentiate is enabled, then even if you have already received `transcribed` results, the service is still evaluating them by accumulated audio information. If the service finds that any previous result was assigned an incorrect `speakerId`, then a nearly identical `Transcribed` result is sent again, where only the `speakerId` and `UtteranceId` are different. Since the `UtteranceId` format is `{index}_{speakerId}_{Offset}`, when you receive a `transcribed` result, you could use `UtteranceId` to determine if the current `transcribed` result is going to correct a previous one. Your client or UI logic could decide behaviors, like overwriting previous output, or to ignore the latest result.

articles/ai-services/speech-service/includes/how-to/meeting-transcription/real-time-python.md

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@ Here's what the sample does:
 * Meeting identifier for creating meeting.
 * Adds participants to the meeting. The strings `voiceSignatureStringUser1` and `voiceSignatureStringUser2` should come as output from the previous steps.
 * Read the whole wave files at once and stream it to SDK and begins transcription.
-* If you want to differentiate speakers without providing voice samples, you enable the `DifferentiateGuestSpeakers` feature as in [Meeting Transcription Overview](../../../meeting-transcription.md).
+* If you want to differentiate speakers without providing voice samples, you enable the `DifferentiateGuestSpeakers` feature.
 
 If speaker identification or differentiate is enabled, then even if you received `transcribed` results, the service is still evaluating them by accumulated audio information. If the service finds that any previous result was assigned an incorrect `speakerId`, then a nearly identical `Transcribed` result is sent again, where only the `speakerId` and `UtteranceId` are different. Since the `UtteranceId` format is `{index}_{speakerId}_{Offset}`, when you receive a `transcribed` result, you could use `UtteranceId` to determine if the current `transcribed` result is going to correct a previous one. Your client or UI logic could decide behaviors, like overwriting previous output, or to ignore the latest result.
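The `UtteranceId` convention described in the unchanged context above (`{index}_{speakerId}_{Offset}`) lets a client detect when a new `transcribed` result corrects an earlier one: per the docs, a correction differs only in `speakerId` (and therefore in `UtteranceId`), while index and offset stay the same. A minimal client-side sketch, assuming these helper names (they are illustrative, not part of the Speech SDK):

```python
def parse_utterance_id(utterance_id: str) -> tuple[str, str, str]:
    """Split an UtteranceId of the form {index}_{speakerId}_{Offset}."""
    index, speaker_id, offset = utterance_id.split("_", 2)
    return index, speaker_id, offset

def is_correction(previous_id: str, latest_id: str) -> bool:
    """True when the latest result revisits the same utterance (same index
    and offset) but the service reassigned the speakerId."""
    prev_idx, prev_speaker, prev_off = parse_utterance_id(previous_id)
    new_idx, new_speaker, new_off = parse_utterance_id(latest_id)
    return (prev_idx, prev_off) == (new_idx, new_off) and prev_speaker != new_speaker
```

On a `True` result, client or UI logic could overwrite the previously rendered utterance; on `False`, treat it as a new utterance.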

articles/ai-services/speech-service/includes/how-to/remote-meeting/csharp/examples.md

Lines changed: 0 additions & 141 deletions
This file was deleted.
