-If you don't specify `sentenceCount`, the model determines the summary length. Note that `sentenceCount` is the approximation of the sentence count of the output summary, range 1 to 20.
+If you don't specify `sentenceCount`, the model determines the summary length. Note that `sentenceCount` approximates the number of sentences in the output summary, in the range 1 to 20. Using `sentenceCount` isn't recommended for abstractive summarization.
2. Make the following changes in the command where needed:
- Replace the value `your-language-resource-key` with your key.
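As a sketch of how the `sentenceCount` parameter noted above might fit into a full request: the task kind, header names, and overall JSON shape below are assumptions based on the standard analyze-text jobs request format used elsewhere in this article, and the document text is a placeholder.

```bash
# Minimal sketch of an extractive summarization request that sets `sentenceCount`.
# Task kind, headers, and JSON shape are assumed from the standard analyze-text
# jobs format; endpoint, key, and document text are placeholders.
curl -i -X POST "https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-11-15-preview" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
  -d '
{
  "displayName": "Extractive summarization with sentenceCount",
  "analysisInput": {
    "documents": [
      { "id": "1", "language": "en", "text": "<your document text>" }
    ]
  },
  "tasks": [
    {
      "kind": "ExtractiveSummarization",
      "taskName": "Extractive summarization task",
      "parameters": { "sentenceCount": 5 }
    }
  ]
}
'
```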
@@ -203,11 +203,11 @@ curl -X GET https://<your-language-resource-endpoint>/language/analyze-text/jobs
The following cURL commands are executed from a BASH shell. Edit these commands with your own resource name, resource key, and JSON values.
-## Query based extractive summarization
+## Query based summarization
-The query-based extractive summarization API is an extension to the existing document summarization API.
+The query-based document summarization API is an extension to the existing document summarization API.
-The biggest difference is a new `query` field in the request body (under `tasks` > `parameters` > `query`). Additionally, there's a new way to specify the preferred `summaryLength` in "buckets" of short/medium/long, which we recommend using instead of `sentenceCount`. Below is an example request:
+The biggest difference is a new `query` field in the request body (under `tasks` > `parameters` > `query`). Additionally, there's a new way to specify the preferred `summaryLength` in "buckets" of short/medium/long, which we recommend using instead of `sentenceCount`, especially for abstractive summarization. Below is an example request:
```bash
curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-11-15-preview \
```
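The example request is truncated at this hunk boundary. As a sketch of what a complete request using the new `query` field and a `summaryLength` bucket might look like: only `query` and `summaryLength` (short/medium/long) are confirmed by the text above; the task kind, headers, and the rest of the JSON shape are assumptions based on the standard analyze-text jobs format.

```bash
# Sketch of a complete query-based request. Only `query` and `summaryLength`
# are confirmed by the surrounding text; task kind, headers, and JSON shape
# are assumed from the standard analyze-text jobs format. Values are placeholders.
curl -i -X POST "https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-11-15-preview" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <your-language-resource-key>" \
  -d '
{
  "displayName": "Query-based summarization",
  "analysisInput": {
    "documents": [
      { "id": "1", "language": "en", "text": "<your document text>" }
    ]
  },
  "tasks": [
    {
      "kind": "AbstractiveSummarization",
      "taskName": "Query-based abstractive summarization",
      "parameters": {
        "query": "<your query>",
        "summaryLength": "short"
      }
    }
  ]
}
'
```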
@@ -232,50 +232,14 @@ curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/
-The query-based abstractive summarization API is an extension to the existing document summarization API.
-
-The biggest difference is a new `query` field in the request body (under `tasks` > `parameters` > `query`). Additionally, there's a new way to specify the preferred `summaryLength` in "buckets" of short/medium/long, which we recommend using instead of `sentenceCount`. Below is an example request:
-
-```bash
-curl -i -X POST https://<your-language-resource-endpoint>/language/analyze-text/jobs?api-version=2023-11-15-preview \
-    "text": "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI services, I have been working with a team of amazing scientists and engineers to turn this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear, see, and understand humans better. We believe XYZ-code enables us to fulfill our long-term vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have pretrained models that can jointly learn representations to support a broad range of downstream AI tasks, much in the way humans do today. Over the past five years, we have achieved human performance on benchmarks in conversational speech recognition, machine translation, conversational question answering, machine reading comprehension, and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious aspiration to produce a leap in AI capabilities, achieving multi-sensory and multilingual learning that is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
articles/ai-services/language-service/summarization/includes/regional-availability.md (2 additions & 0 deletions)
@@ -13,6 +13,8 @@ ms.custom:
---
> [!IMPORTANT]
+> One preview region, Sweden Central, showcases our latest and continually evolving LLM fine-tuning techniques, where all summarization capabilities are available. We welcome customers to join our early adopter community by filling out the [access request form](https://aka.ms/applyforgatedsummarizationfeatures).
+> Your valuable feedback is vital to our continuous enhancement.
> Conversation summarization is only available using:
articles/ai-services/speech-service/pronunciation-assessment-tool.md (1 addition & 1 deletion)
@@ -35,7 +35,7 @@ At the bottom of the Assessment result, two overall scores are displayed: Pronun
**Content Score**: This score provides an aggregated assessment of the content of the speech and includes three sub-aspects. This score is only available in the speaking tab for an unscripted assessment.
> [!NOTE]
-> Content score is currently available on the following regions: `westcentralus`, `eastasia`, `eastus`, `northeurope`, `westeurope`, and `westus2`. All other regions will have Content score available starting from Nov 30, 2023.
+> Content score is currently available in the following regions in Speech Studio: `westcentralus`, `eastasia`, `eastus`, `northeurope`, `westeurope`, and `westus2`. All other regions will have Content score available starting from Nov 30, 2023.
- **Vocabulary score**: Evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, as well as the level of lexical complexity.
- **Grammar score**: Evaluates the correctness of grammar usage and variety of sentence patterns. It considers lexical accuracy, grammatical accuracy, and diversity of sentence structures, providing a more comprehensive evaluation of language proficiency.
articles/ai-studio/how-to/configure-managed-network.md (1 addition & 1 deletion)
@@ -37,7 +37,7 @@ There are three different configuration modes for outbound traffic from the mana
| ----- | ----- | ----- |
| Allow internet outbound | Allow all internet outbound traffic from the managed VNet. | You want unrestricted access to machine learning resources on the internet, such as python packages or pretrained models.<sup>1</sup> |
| Allow only approved outbound | Outbound traffic is allowed by specifying service tags. | * You want to minimize the risk of data exfiltration, but you need to prepare all required machine learning artifacts in your private environment.</br>* You want to configure outbound access to an approved list of services, service tags, or FQDNs. |
-| Disabled | Inbound and outbound traffic isn't restricted or you're using your own Azure Virtual Network to protect resources. | You want public inbound and outbound from the Azure AI, or you're handling network isolation with your own Azure VNet. |
+| Disabled | Inbound and outbound traffic isn't restricted. | You want public inbound and outbound from the Azure AI. |
<sup>1</sup> You can use outbound rules with _allow only approved outbound_ mode to achieve the same result as using allow internet outbound. The differences are:
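As an aside on the table above: these isolation modes can also typically be set from the command line when creating the workspace. The sketch below assumes the Azure ML CLI extension's `--managed-network` flag (values `allow_internet_outbound`, `allow_only_approved_outbound`, `disabled`) applies to the Azure AI hub; verify against the current CLI reference, since this article may rely on the portal flow instead.

```bash
# Sketch only: assumes `az ml workspace create` accepts a --managed-network flag
# with allow_internet_outbound, allow_only_approved_outbound, or disabled.
az ml workspace create \
  --name <your-hub-name> \
  --resource-group <your-resource-group> \
  --managed-network allow_only_approved_outbound
```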