Commit bd33207

Update real-time-transcription-js.md

Add support for speechRecognitionModelEndpointId

1 parent d5a48f7 commit bd33207

File tree

1 file changed: +14 −3 lines

articles/communication-services/how-tos/call-automation/includes/real-time-transcription-js.md

Lines changed: 14 additions & 3 deletions
````diff
@@ -19,7 +19,8 @@ const transcriptionOptions = {
   transportUrl: "",
   transportType: "websocket",
   locale: "en-US",
-  startTranscription: false
+  startTranscription: false,
+  speechRecognitionModelEndpointId: "YOUR_CUSTOM_SPEECH_RECOGNITION_MODEL_ID"
 };
 
 const options = {
@@ -41,7 +42,8 @@ const transcriptionOptions = {
   transportUri: "",
   locale: "en-US",
   transcriptionTransport: "websocket",
-  startTranscription: false
+  startTranscription: false,
+  speechRecognitionModelEndpointId: "YOUR_CUSTOM_SPEECH_RECOGNITION_MODEL_ID"
 };
 
 const callIntelligenceOptions = {
@@ -178,7 +180,16 @@ console.log('WebSocket server running on port 8081');
 For situations where your application allows users to select their preferred language, you may also want to capture the transcription in that language. To do this task, the Call Automation SDK allows you to update the transcription locale.
 
 ```javascript
-await callMedia.updateTranscription("en-US-NancyNeural");
+async function updateTranscriptionAsync() {
+  const options: UpdateTranscriptionOptions = {
+    operationContext: "updateTranscriptionContext",
+    speechRecognitionModelEndpointId: "YOUR_CUSTOM_SPEECH_RECOGNITION_MODEL_ID"
+  };
+  await acsClient
+    .getCallConnection(callConnectionId)
+    .getCallMedia()
+    .updateTranscription("en-au", options);
+}
 ```
 
 ## Stop Transcription
````
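To show the new field in context, here is a minimal, self-contained sketch of the transcription options object as this commit configures it. The transport URL and endpoint ID are placeholders, and the `hasCustomModel` helper is hypothetical (not part of the Call Automation SDK); since `speechRecognitionModelEndpointId` is an optional field, callers may want to guard on it before branching behavior:

```javascript
// Transcription options as configured after this commit.
// speechRecognitionModelEndpointId selects a custom speech recognition
// model; the value below is a placeholder, not a real endpoint ID.
const transcriptionOptions = {
  transportUrl: "wss://example.com/ws",  // placeholder WebSocket URL
  transportType: "websocket",
  locale: "en-US",
  startTranscription: false,
  speechRecognitionModelEndpointId: "YOUR_CUSTOM_SPEECH_RECOGNITION_MODEL_ID"
};

// Hypothetical helper: the field is optional, so check it is a
// non-empty string before treating the options as using a custom model.
function hasCustomModel(opts) {
  return typeof opts.speechRecognitionModelEndpointId === "string"
      && opts.speechRecognitionModelEndpointId.length > 0;
}

console.log(hasCustomModel(transcriptionOptions)); // → true
```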
