articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
89 additions & 71 deletions
@@ -86,7 +86,7 @@ Conversation issue and resolution summarization also enables you to get summarie
### Get chapter titles
-Conversation summarization lets you get chapter titles from input conversations. A guided example scenario is provided below:
+Conversation chapter title summarization lets you get chapter titles from input conversations. A guided example scenario is provided below:
1. Copy the command below into a text editor. The Bash example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character.
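As a sketch of the request body the command above submits, the following Python snippet builds a chapter title summarization payload. The endpoint, key, and conversation content are placeholders, and the exact field values (such as the task name) are illustrative; only the `ConversationalSummarizationTask` kind and the `chapterTitle` summary aspect follow the API described in this article.

```python
import json

# Illustrative request body for conversation chapter title summarization.
# The conversation items and taskName are made-up placeholders; the "kind"
# and "summaryAspects" values follow the REST API described above.
body = {
    "displayName": "Conversation summarization example",
    "analysisInput": {
        "conversations": [
            {
                "id": "1",
                "language": "en",
                "modality": "text",
                "conversationItems": [
                    {"id": "1", "participantId": "Agent",
                     "text": "Hello, how can I help you?"},
                    {"id": "2", "participantId": "Customer",
                     "text": "My internet connection keeps dropping."},
                ],
            }
        ]
    },
    "tasks": [
        {
            "taskName": "chapterTitle_task",
            "kind": "ConversationalSummarizationTask",
            "parameters": {"summaryAspects": ["chapterTitle"]},
        }
    ],
}

# This JSON is what you would POST to the analyze-conversations jobs endpoint.
print(json.dumps(body, indent=2))
```

The printed JSON can be pasted directly into the `-d` argument of the curl command in place of the inline body.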
@@ -188,47 +188,56 @@ curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversat
For long conversations, the model might segment the input into multiple cohesive parts and summarize each segment. Each summary also includes a `contexts` field, which indicates the range of the input conversation that the summary was generated from.
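To show how the `contexts` field can be used, here is a minimal sketch that maps each segment summary back to the conversation items it covers. The response excerpt is fabricated, and the exact shape of each context entry (`conversationItemId`, `offset`, `length`) is an assumption for illustration.

```python
# Hypothetical excerpt of a summarization result. The contexts entry shape
# (conversationItemId, offset, length) is an assumption for this sketch.
result = {
    "summaries": [
        {
            "aspect": "narrative",
            "text": "The customer reported connection drops; "
                    "the agent suggested restarting the router.",
            "contexts": [
                {"conversationItemId": "1", "offset": 0, "length": 27},
                {"conversationItemId": "4", "offset": 0, "length": 41},
            ],
        }
    ]
}

# For each segment summary, list which conversation items it was built from.
for summary in result["summaries"]:
    item_ids = [c["conversationItemId"] for c in summary["contexts"]]
    print(f"{summary['aspect']}: covers items {item_ids[0]}..{item_ids[-1]}")
```

This kind of mapping is useful for highlighting, in a UI, which turns of the conversation each segment summary corresponds to.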
@@ -337,47 +346,56 @@ curl -X GET https://<your-language-resource-endpoint>/language/analyze-conversat
articles/cognitive-services/language-service/summarization/overview.md
2 additions & 2 deletions
@@ -47,7 +47,7 @@ As an example, consider the following paragraph of text:
*"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive Services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pre-trained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."*
-The document summarization API request is processed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](https://learn.microsoft.com/en-us/azure/cognitive-services/language-service/concepts/multilingual-emoji-support) for more information.
+The document summarization API request is processed upon receipt of the request by creating a job for the API backend. If the job succeeded, the output of the API will be returned. The output will be available for retrieval for 24 hours. After this time, the output is purged. Due to multilingual and emoji support, the response may contain text offsets. See [how to process offsets](https://learn.microsoft.com/azure/cognitive-services/language-service/concepts/multilingual-emoji-support) for more information.
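The paragraph above describes an asynchronous job pattern: submit the request, then poll the returned job URL until it completes. The following is a minimal sketch of that polling loop; `fetch_status` is a hypothetical stand-in for a GET on the job's status URL, and the status strings are illustrative.

```python
import time

# Minimal sketch of the async job pattern described above: after submitting
# a job, the client polls its status until it reaches a terminal state.
# fetch_status is a hypothetical callable standing in for a GET on the job URL.
def poll(fetch_status, interval=0.01, max_tries=10):
    for _ in range(max_tries):
        status = fetch_status()
        if status in ("succeeded", "failed", "cancelled"):
            return status
        time.sleep(interval)  # back off between polls
    raise TimeoutError("job did not finish in time")

# Simulated job that succeeds on the third poll.
states = iter(["notStarted", "running", "succeeded"])
print(poll(lambda: next(states)))  # prints "succeeded"
```

In a real client, `fetch_status` would issue an authenticated GET and the interval would typically be a second or more; remember that the output is purged after 24 hours, so results should be retrieved promptly.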
Using the above example, the API might return the following summarized sentences:
@@ -131,7 +131,7 @@ To use this feature, you submit raw text for analysis and handle the API output
|Development option |Description | Links |
|---------|---------|---------|
-| REST API | Integrate conversation summarization into your applications using the REST API. |[Quickstart: Use conversation summarization](quickstart?tabs=conversation-summarization&pivots=rest-api.md)|
+| REST API | Integrate conversation summarization into your applications using the REST API. |[Quickstart: Use conversation summarization](quickstart.md?tabs=conversation-summarization&pivots=rest-api)|