
Commit e738d87

Merge pull request #5793 from aahill/june-freshness
acrolinx pass
2 parents: cf05d16 + ef0e555 · commit e738d87

File tree

13 files changed: +116, -118 lines changed


articles/ai-services/language-service/concepts/role-based-access-control.md

Lines changed: 4 additions & 4 deletions
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: conceptual
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ---

@@ -51,7 +51,7 @@ These custom roles only apply to Language resources.

 ### Cognitive Services Language Reader

-A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application’s assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results.
+A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They might want to review the application’s assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results.


 :::row:::
@@ -85,7 +85,7 @@ A user that should only be validating and reviewing the Language apps, typically

 ### Cognitive Services Language Writer

-A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn’t have access to deploying this application to the runtime, as they may accidentally reflect their changes in production. They also shouldn’t be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned.
+A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn’t have access to deploying this application to the runtime, as they might accidentally reflect their changes in production. They also shouldn’t be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They might also create new applications under this resource, but with the restrictions mentioned.

 :::row:::
 :::column span="":::
@@ -104,7 +104,7 @@ A user that is responsible for building and modifying an application, as a colla
 :::column-end:::
 :::column span="":::
 * All APIs under Language reader
-* All POST, PUT and PATCH APIs under:
+* All POST, PUT, and PATCH APIs under:
 * [Language conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring)
 * [Language text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)
 * [question answering projects](/rest/api/questionanswering/question-answering-projects)
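
For context on the Writer role's scope, here is a minimal, hedged sketch (not part of this commit) of one authoring call that falls under the POST/PUT/PATCH operations listed above — starting a training job for a CLU project with the 2023-04-01 API version the doc links to. The endpoint, key, project name, and body values are placeholders.

```python
import requests

# Placeholders for illustration only; not values from this commit.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
KEY = "<your-resource-key>"
PROJECT = "<your-clu-project>"

# POST .../:train is an authoring operation available to the
# Cognitive Services Language Writer role but not to the Reader role.
url = f"{ENDPOINT}/language/authoring/analyze-conversations/projects/{PROJECT}/:train"
body = {
    "modelLabel": "model-v1",
    "trainingMode": "standard",
    "evaluationOptions": {
        "kind": "percentage",
        "trainingSplitPercentage": 80,
        "testingSplitPercentage": 20,
    },
}
response = requests.post(
    url,
    params={"api-version": "2023-04-01"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=body,
)
# Training is asynchronous; the job status URL comes back in a header.
print(response.status_code, response.headers.get("operation-location"))
```

A caller holding only the Reader role should be able to issue the GET calls under "All APIs under Language reader" but would receive an authorization error on a POST like this one.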

articles/ai-services/language-service/conversational-language-understanding/concepts/multiple-languages.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: conceptual
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ---

articles/ai-services/language-service/conversational-language-understanding/how-to/deploy-model.md

Lines changed: 5 additions & 5 deletions
@@ -6,18 +6,18 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: how-to
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ms.custom: language-service-clu,
 ---

 # Deploy a model

-Once you are satisfied with how your model performs, it's ready to be deployed, and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).
+Once you're satisfied with how your model performs, it's ready to be deployed, and query it for predictions from utterances. Deploying a model makes it available for use through the [prediction API](/rest/api/language/2023-04-01/conversation-analysis-runtime/analyze-conversation).

 ## Prerequisites

-* A successfully [created project](create-project.md)
+* A [created project](create-project.md)
 * [Labeled utterances](tag-utterances.md) and successfully [trained model](train-model.md)
 * Reviewed the [model performance](view-model-evaluation.md) to determine how your model is performing.

@@ -45,7 +45,7 @@ After you have reviewed the model's performance and decide it's fit to be used i

 ## Swap deployments

-After you are done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
+After you're done testing a model assigned to one deployment, you might want to assign it to another deployment. Swapping deployments involves:
 * Taking the model assigned to the first deployment, and assigning it to the second deployment.
 * taking the model assigned to second deployment and assign it to the first deployment.

@@ -89,7 +89,7 @@ You can [deploy your project to multiple regions](../../concepts/custom-features

 ## Unassign deployment resources

-When unassigning or removing a deployment resource from a project, you will also delete all the deployments that have been deployed to that resource's region.
+When unassigning or removing a deployment resource from a project, you'll also delete all the deployments that have been deployed to the resource's region.

 # [Language Studio](#tab/language-studio)
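
To illustrate the runtime behavior these lines describe (querying a deployed model through the prediction API), here is a hedged sketch that is not part of this commit; the endpoint, key, project, deployment, and sample utterance are placeholders.

```python
import requests

# Placeholders for illustration only; not values from this commit.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
KEY = "<your-resource-key>"

body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {
            "id": "1",
            "participantId": "user",
            "text": "Book a flight to Cairo",
        }
    },
    # The deployment name is what the swap/unassign steps above operate on.
    "parameters": {
        "projectName": "<your-clu-project>",
        "deploymentName": "<your-deployment>",
    },
}

# Runtime (prediction) endpoint referenced by the doc's 2023-04-01 API link.
response = requests.post(
    f"{ENDPOINT}/language/:analyze-conversations",
    params={"api-version": "2023-04-01"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=body,
)
print(response.json())
```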

articles/ai-services/language-service/custom-text-classification/how-to/tag-data.md

Lines changed: 10 additions & 10 deletions
@@ -7,14 +7,14 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: how-to
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ms.custom: language-service-custom-classification
 ---

 # Label text data for training your model

-Before training your model you need to label your documents with the classes you want to categorize them into. Data labeling is a crucial step in development lifecycle; in this step you can create the classes you want to categorize your data into and label your documents with these classes. This data will be used in the next step when training your model so that your model can learn from the labeled data. If you already have labeled data, you can directly [import](create-project.md) it into your project but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md).
+Before training your model, you need to label your documents with the classes you want to categorize them into. Data labeling is a crucial step in development lifecycle; in this step you can create the classes you want to categorize your data into and label your documents with these classes. This data will be used in the next step when training your model so that your model can learn from the labeled data. If you already labeled your data, you can directly [import](create-project.md) it into your project but you need to make sure that your data follows the [accepted data format](../concepts/data-formats.md).

 Before creating a custom text classification model, you need to have labeled data first. If your data isn't labeled already, you can label it in the [Language Studio](https://aka.ms/languageStudio). Labeled data informs the model how to interpret text, and is used for training and evaluation.

@@ -23,19 +23,19 @@ Before creating a custom text classification model, you need to have labeled dat
 Before you can label data, you need:

 * [A successfully created project](create-project.md) with a configured Azure blob storage account,
-* Documents containing text data that have [been uploaded](design-schema.md#data-preparation) to your storage account.
+* Documents containing the [uploaded](design-schema.md#data-preparation) text data in your storage account.

 See the [project development lifecycle](../overview.md#project-development-lifecycle) for more information.

 ## Data labeling guidelines

-After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which documents will be associated with the classes you need. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels will be stored in the JSON file in your storage container that you've connected to this project.
+After [preparing your data, designing your schema](design-schema.md) and [creating your project](create-project.md), you will need to label your data. Labeling your data is important so your model knows which documents will be associated with the classes you need. When you label your data in [Language Studio](https://aka.ms/languageStudio) (or import labeled data), these labels are stored in the JSON file in your storage container that you've connected to this project.

 As you label your data, keep in mind:

 * In general, more labeled data leads to better results, provided the data is labeled accurately.

-* There is no fixed number of labels that can guarantee your model will perform the best. Model performance on possible ambiguity in your [schema](design-schema.md), and the quality of your labeled data. Nevertheless, we recommend 50 labeled documents per class.
+* There is no fixed number of labels that can guarantee your model performs the best. Model performance on possible ambiguity in your [schema](design-schema.md), and the quality of your labeled data. Nevertheless, we recommend 50 labeled documents per class.

 ## Label your data

@@ -61,7 +61,7 @@ Use the following steps to label your data:

 # [Multi label classification](#tab/multi-classification)

-**Multi label classification**: your file can be labeled with multiple classes, you can do so by selecting all applicable check boxes next to the classes you want to label this document with.
+**Multi label classification**: your file can be labeled with multiple classes. You can do so by selecting all applicable check boxes next to the classes you want to label this document with.

 :::image type="content" source="../media/multiple.png" alt-text="A screenshot showing the multiple label classification tag page." lightbox="../media/multiple.png":::

@@ -77,24 +77,24 @@ Use the following steps to label your data:

 6. In the right side pane under the **Labels** pivot you can find all the classes in your project and the count of labeled instances per each.

-7. In the bottom section of the right side pane you can add the current file you are viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they are used for model training and evaluation.
+7. In the bottom section of the right side pane you can add the current file you're viewing to the training set or the testing set. By default all the documents are added to your training set. Learn more about [training and testing sets](train-model.md#data-splitting) and how they're used for model training and evaluation.

 > [!TIP]
-> If you are planning on using **Automatic** data splitting use the default option of assigning all the documents into your training set.
+> If you're planning on using **Automatic** data splitting, use the default option of assigning all the documents into your training set.

 8. Under the **Distribution** pivot you can view the distribution across training and testing sets. You have two options for viewing:
 * *Total instances* where you can view count of all labeled instances of a specific class.
 * *documents with at least one label* where each document is counted if it contains at least one labeled instance of this class.

-9. While you're labeling, your changes will be synced periodically, if they have not been saved yet you will find a warning at the top of your page. If you want to save manually, select **Save labels** button at the bottom of the page.
+9. While you're labeling, your changes are synced periodically, if they have not been saved yet you will find a warning at the top of your page. If you want to save manually, select **Save labels** button at the bottom of the page.

 ## Remove labels

 If you want to remove a label, uncheck the button next to the class.

 ## Delete or classes

-To delete a class, select the delete icon next to the class you want to remove. Deleting a class will remove all its labeled instances from your dataset.
+To delete a class, select the icon next to the class you want to remove. Deleting a class will remove all its labeled instances from your dataset.

 ## Next steps
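
The hunks above mention that labels are stored as JSON in the connected storage container, following the accepted data format. As a hedged sketch only (not part of this commit; field names and values are illustrative placeholders and should be checked against the data formats article the doc links to), the labels file might be generated like this:

```python
import json

# Illustrative labels payload for a custom text classification project.
# Project name, container, class names, and file names are placeholders.
labels = {
    "projectFileVersion": "2022-05-01",
    "stringIndexType": "Utf16CodeUnit",
    "metadata": {
        "projectKind": "CustomMultiLabelClassification",
        "projectName": "<your-project>",
        "storageInputContainerName": "<your-container>",
        "language": "en",
    },
    "assets": {
        "projectKind": "CustomMultiLabelClassification",
        "classes": [{"category": "Support"}, {"category": "Billing"}],
        "documents": [
            {
                "location": "doc1.txt",
                "language": "en",
                # Matches the training/testing split discussed in step 7 above.
                "dataset": "Train",
                "classes": [{"category": "Support"}, {"category": "Billing"}],
            }
        ],
    },
}

with open("labels.json", "w") as f:
    json.dump(labels, f, indent=2)
```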

articles/ai-services/language-service/custom-text-classification/tutorials/triage-email.md

Lines changed: 3 additions & 3 deletions
@@ -7,11 +7,11 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: tutorial
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ---

-# Tutorial: Triage incoming emails with power automate
+# Tutorial: Triage incoming emails with Power Automate

 In this tutorial you will categorize and triage incoming email using custom text classification. Using this [Power Automate](/power-automate/getting-started) flow, when a new email is received, its contents will have a classification applied, and depending on the result, a message will be sent to a designated channel on [Microsoft Teams](https://www.microsoft.com/microsoft-teams).

@@ -27,7 +27,7 @@ In this tutorial you will categorize and triage incoming email using custom text

 ## Create a Power Automate flow

-1. [Sign in to power automate](https://make.powerautomate.com/)
+1. [Sign in to Power Automate](https://make.powerautomate.com/)

 2. From the left side menu, select **My flows** and create a **Automated cloud flow**

articles/ai-services/language-service/language-detection/language-support.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: conceptual
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ms.custom: language-service-language-detection, ignite-2024
 ---

articles/ai-services/language-service/orchestration-workflow/quickstart.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ author: laujan
 manager: nitinme
 ms.service: azure-ai-language
 ms.topic: quickstart
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.author: lajanuar
 ms.custom: language-service-clu, mode-other
 zone_pivot_groups: usage-custom-language-features

articles/ai-services/language-service/question-answering/concepts/azure-resources.md

Lines changed: 3 additions & 3 deletions
@@ -5,7 +5,7 @@ ms.service: azure-ai-language
 ms.topic: conceptual
 author: laujan
 ms.author: lajanuar
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.custom: language-service-question-answering
 ---

@@ -34,7 +34,7 @@ Typically there are three parameters you need to consider:

 * The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs.

-* This should also influence your **Azure AI Search** SKU selection, see more details [here](/azure/search/search-sku-tier). Additionally, you might need to adjust Azure AI Search [capacity](/azure/search/search-capacity-planning) with replicas.
+* This should also influence your **Azure AI Search** selection, see more details [here](/azure/search/search-sku-tier). Additionally, you might need to adjust Azure AI Search [capacity](/azure/search/search-capacity-planning) with replicas.

 * **Size and the number of projects**: Choose the appropriate [Azure search SKU](https://azure.microsoft.com/pricing/details/search/) for your scenario. Typically, you decide the number of projects you need based on number of different subject domains. One subject domain (for a single language) should be in one project.

@@ -58,7 +58,7 @@ The following table gives you some high-level guidelines.
 ## Recommended settings


-The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) SKU of Azure AI Search.
+The throughput for custom question answering is currently capped at 10 text records per second for both management APIs and prediction APIs. To target 10 text records per second for your service, we recommend the S1 (one instance) tier of Azure AI Search.


 ## Keys in custom question answering
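
The 10 text records per second cap above also implies a client-side concern: callers that batch work should pace their requests. A generic, hedged sketch (not part of this commit; the sender function and workload are placeholders):

```python
import time

# Keep calls under the documented cap of 10 text records per second.
MAX_RECORDS_PER_SECOND = 10


def throttled(records, send):
    """Send records one at a time, never exceeding the documented cap."""
    interval = 1.0 / MAX_RECORDS_PER_SECOND
    for record in records:
        start = time.monotonic()
        send(record)  # e.g., a management or prediction API call
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)


# Example usage with a stand-in sender:
throttled(["What is my deductible?"] * 25, send=lambda text: print("query:", text))
```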

articles/ai-services/language-service/question-answering/how-to/authoring.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ ms.service: azure-ai-language
 author: laujan
 ms.author: lajanuar
 ms.topic: how-to
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ---

 # Authoring API

articles/ai-services/language-service/question-answering/how-to/chit-chat.md

Lines changed: 4 additions & 4 deletions
@@ -1,20 +1,20 @@
 ---
 title: Adding chitchat to a custom question answering project
 titleSuffix: Azure AI services
-description: Adding personal chitchat to your bot makes it more conversational and engaging when you create a project. Custom question answering allows you to easily add a pre-populated set of the top chitchat, into your projects.
+description: Adding personal chitchat to your bot makes it more conversational and engaging when you create a project. Custom question answering allows you to easily add a prepopulated set of the top chitchat, into your projects.
 #services: cognitive-services
 manager: nitinme
 author: laujan
 ms.author: lajanuar
 ms.service: azure-ai-language
 ms.topic: how-to
-ms.date: 11/21/2024
+ms.date: 06/30/2025
 ms.custom: language-service-question-answering
 ---

 # Use chitchat with a project

-Adding chitchat to your bot makes it more conversational and engaging. The chitchat feature in custom question answering allows you to easily add a pre-populated set of the top chitchat, into your project. This can be a starting point for your bot's personality, and it will save you the time and cost of writing them from scratch.
+Adding chitchat to your bot makes it more conversational and engaging. The chitchat feature in custom question answering allows you to easily add a prepopulated set of the top chitchat, into your project. This can be a starting point for your bot's personality, and it will save you the time and cost of writing them from scratch.

 This dataset has about 100 scenarios of chitchat in the voice of multiple personas, like Professional, Friendly and Witty. Choose the persona that most closely resembles your bot's voice. Given a user query, custom question answering tries to match it with the closest known chitchat question and answer.

@@ -70,7 +70,7 @@ To turn the views for context and metadata on and off, select **Show columns** i

 ## Add more chitchat questions and answers

-You can add a new chitchat question pair that is not in the predefined data set. Ensure that you are not duplicating a question pair that is already covered in the chitchat set. When you add any new chitchat question pair, it gets added to your **Editorial** source. To ensure the ranker understands that this is chitchat, add the metadata key/value pair "Editorial: chitchat", as seen in the following image:
+You can add a new chitchat question pair that is not in the predefined data set. Ensure that you are not duplicating a question pair that is already covered in the chitchat set. When you add any new chitchat question pair, it gets added to your **Editorial** source. To ensure the ranker understands that this is chitchat, add the metadata key/value pair "Editorial: chitchat," as seen in the following image:

 :::image type="content" source="../media/chit-chat/add-new-chit-chat.png" alt-text="Add chitchat question answer pairs" lightbox="../media/chit-chat/add-new-chit-chat.png":::
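
For readers who maintain question pairs programmatically rather than through the portal, a hedged sketch of attaching the same metadata pair via the question answering authoring API follows. It is not part of this commit; the route, payload shape, endpoint, key, and project name are assumptions to verify against the authoring reference linked earlier in this changeset.

```python
import requests

# Placeholders; verify route and payload against the question answering
# authoring API reference before relying on this sketch.
ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
KEY = "<your-resource-key>"
PROJECT = "<your-project>"

operations = [
    {
        "op": "add",
        "value": {
            "answer": "Happy to help!",
            "questions": ["Thanks a lot!", "You're awesome"],
            # The metadata pair the doc calls out so the ranker treats this as chitchat.
            "metadata": {"editorial": "chitchat"},
        },
    }
]

response = requests.patch(
    f"{ENDPOINT}/language/query-knowledgebases/projects/{PROJECT}/qnas",
    params={"api-version": "2021-10-01"},
    headers={"Ocp-Apim-Subscription-Key": KEY},
    json=operations,
)
print(response.status_code)
```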
