articles/cognitive-services/Speech-Service/includes/quickstarts/call-center/azure-prerequisites.md (+1 -1)

@@ -15,4 +15,4 @@ ms.author: eur
 > [!IMPORTANT]
 > This quickstart requires access to [conversation summarization](/azure/cognitive-services/language-service/summarization/how-to/conversation-summarization). To get access, you must submit an [online request](https://aka.ms/applyforconversationsummarization/) and have it approved.
 >
-> The `--languageKey` and `--languageEndpoint` values in this quickstart must correspond to a resource that's in one of the regions supported by the [conversation summarization API](https://aka.ms/convsumregions).
+> The `--languageKey` and `--languageEndpoint` values in this quickstart must correspond to a resource that's in one of the regions supported by the [conversation summarization API](https://aka.ms/convsumregions): `eastus`, `northeurope`, and `uksouth`.
articles/cognitive-services/Speech-Service/includes/quickstarts/call-center/example-output.md (+1 -1)

@@ -40,7 +40,7 @@ The `transcription` property contains a JSON object with the results of sentimen
 }
 ```
 
-The `conversationAnalyticsResults` property contains a JSON object with the results of the conversation summarization analysis. Here's an example, with redactions for brevity:
+The `conversationAnalyticsResults` property contains a JSON object with the results of the conversation PII and conversation summarization analysis. Here's an example, with redactions for brevity:
articles/cognitive-services/Speech-Service/includes/quickstarts/call-center/usage-arguments.md (+3 -3)

@@ -7,7 +7,7 @@ ms.author: eur
 ---
 
 > [!IMPORTANT]
-> You can use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne" title="Create a Cognitive Services resource" target="_blank">Cognitive Services multi-service</a> resource or separate <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language</a> and <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech</a> resources. In either case, the `--languageKey` and `--languageEndpoint` values must correspond to a resource that's in one of the regions supported by the [conversation summarization API](https://aka.ms/convsumregions).
+> You can use a <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne" title="Create a Cognitive Services resource" target="_blank">Cognitive Services multi-service</a> resource or separate <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics" title="Create a Language resource" target="_blank">Language</a> and <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech</a> resources. In either case, the `--languageKey` and `--languageEndpoint` values must correspond to a resource that's in one of the regions supported by the [conversation summarization API](https://aka.ms/convsumregions): `eastus`, `northeurope`, and `uksouth`.
 
 Connection options include:

@@ -21,7 +21,7 @@ Input options include:
 
 - `--input URL`: Input audio from URL. You must set either the `--input` or `--jsonInput` option.
 - `--jsonInput FILE`: Input an existing batch transcription JSON result from FILE. With this option, you only need a Language resource to process a transcription that you already have. With this option, you don't need an audio file or a Speech resource. Overrides `--input`. You must set either the `--input` or `--jsonInput` option.
-- `--stereo`: Use stereo audio format. If stereo isn't specified, then mono 16khz 16 bit PCM wav files are assumed. Diarization of mono files is used to separate multiple speakers. Diarization of stereo files isn't supported, since 2-channel stereo files should already have one speaker per channel.
+- `--stereo`: Indicates that the audio via `--input URL` should be in stereo format. If stereo isn't specified, then mono 16khz 16 bit PCM wav files are assumed. Diarization of mono files is used to separate multiple speakers. Diarization of stereo files isn't supported, since 2-channel stereo files should already have one speaker per channel.
 - `--certificate`: The PEM certificate file. Required for C++.
 
 Language options include:

@@ -32,4 +32,4 @@ Language options include:
 
 Output options include:
 
 - `--help`: Show the usage help and stop
-- `--output FILE`: Output the transcription, sentiment, and conversation summaries in JSON format to a text file. For more information, see [output examples](/azure/cognitive-services/speech-service/call-center-quickstart#check-results).
+- `--output FILE`: Output the transcription, sentiment, conversation PII, and conversation summaries in JSON format to a text file. For more information, see [output examples](/azure/cognitive-services/speech-service/call-center-quickstart#check-results).
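Taken together, the arguments described in this file can be combined roughly as follows. This is a hypothetical sketch, not output from the PR: the keys, endpoint, region, audio URL, and file names are all placeholders, and `dotnet run` assumes the C# version of the quickstart.

```shell
# Sketch: run the call center quickstart against a stereo recording.
# Placeholders (YourSpeechKey, YourSpeechRegion, YourLanguageKey,
# YourLanguageEndpoint, the audio URL, and the output file) must be
# replaced with your own values.
dotnet run --speechKey YourSpeechKey --speechRegion YourSpeechRegion \
  --languageKey YourLanguageKey --languageEndpoint YourLanguageEndpoint \
  --input "https://example.com/call-recording.wav" --stereo \
  --output call-center-results.json

# Sketch: reprocess an existing batch transcription JSON result.
# Per the --jsonInput description, only the Language resource is needed here.
dotnet run --languageKey YourLanguageKey --languageEndpoint YourLanguageEndpoint \
  --jsonInput existing-transcription.json --output call-center-results.json
```

The second invocation illustrates why `--jsonInput` overrides `--input`: no audio or Speech resource is involved, so the Speech arguments can be omitted entirely.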